Test Report: Docker_Linux_crio_arm64 21664

0ce7767ba630d3046e785243932d5087fdf03a88:2025-10-26:42076

Failed tests (36/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.42
35 TestAddons/parallel/Registry 15.16
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 143.62
38 TestAddons/parallel/InspektorGadget 6.34
39 TestAddons/parallel/MetricsServer 5.4
41 TestAddons/parallel/CSI 41.57
42 TestAddons/parallel/Headlamp 3.89
43 TestAddons/parallel/CloudSpanner 5.34
44 TestAddons/parallel/LocalPath 8.75
45 TestAddons/parallel/NvidiaDevicePlugin 6.29
46 TestAddons/parallel/Yakd 6.28
97 TestFunctional/parallel/ServiceCmdConnect 603.56
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.57
128 TestFunctional/parallel/ServiceCmd/DeployApp 601.13
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
147 TestFunctional/parallel/ServiceCmd/Format 0.41
148 TestFunctional/parallel/ServiceCmd/URL 0.4
190 TestJSONOutput/pause/Command 2.43
196 TestJSONOutput/unpause/Command 2.06
291 TestPause/serial/Pause 8.42
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.49
302 TestStartStop/group/old-k8s-version/serial/Pause 6.67
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.41
315 TestStartStop/group/embed-certs/serial/Pause 6.9
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.01
327 TestStartStop/group/no-preload/serial/Pause 7.21
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.15
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.46
343 TestStartStop/group/newest-cni/serial/Pause 6.1
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.26

TestAddons/serial/Volcano (0.42s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable volcano --alsologtostderr -v=1: exit status 11 (415.888536ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:17:39.177011  722265 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:17:39.177909  722265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:39.177952  722265 out.go:374] Setting ErrFile to fd 2...
	I1026 14:17:39.177975  722265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:39.178282  722265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:17:39.178973  722265 mustload.go:65] Loading cluster: addons-501661
	I1026 14:17:39.179414  722265 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:39.179453  722265 addons.go:606] checking whether the cluster is paused
	I1026 14:17:39.179597  722265 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:39.179663  722265 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:17:39.180152  722265 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:17:39.216639  722265 ssh_runner.go:195] Run: systemctl --version
	I1026 14:17:39.216723  722265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:17:39.236745  722265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:17:39.343468  722265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:17:39.343560  722265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:17:39.374089  722265 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:17:39.374114  722265 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:17:39.374119  722265 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:17:39.374129  722265 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:17:39.374133  722265 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:17:39.374156  722265 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:17:39.374165  722265 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:17:39.374170  722265 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:17:39.374173  722265 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:17:39.374180  722265 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:17:39.374192  722265 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:17:39.374195  722265 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:17:39.374198  722265 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:17:39.374201  722265 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:17:39.374205  722265 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:17:39.374217  722265 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:17:39.374235  722265 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:17:39.374240  722265 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:17:39.374243  722265 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:17:39.374248  722265 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:17:39.374253  722265 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:17:39.374256  722265 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:17:39.374259  722265 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:17:39.374262  722265 cri.go:89] found id: ""
	I1026 14:17:39.374328  722265 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:17:39.389442  722265 out.go:203] 
	W1026 14:17:39.392445  722265 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:17:39.392472  722265 out.go:285] * 
	* 
	W1026 14:17:39.502059  722265 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:17:39.505132  722265 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.42s)
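
All of the addons-disable failures in this run share the root cause visible above: before disabling an addon, minikube probes for paused containers with "sudo runc list -f json", and on this crio node the probe fails because /run/runc does not exist. A minimal sketch for confirming this by hand, assuming the profile name addons-501661 from this run; the crictl command is copied verbatim from the cri.go lines above and talks to cri-o over its own socket, so it succeeds even though the runc state directory is missing:

	# Failing paused-state probe: runc reads its state from /run/runc, which is absent here.
	out/minikube-linux-arm64 -p addons-501661 ssh "sudo runc list -f json"

	# Same kube-system container listing via cri-o's CRI socket (copied from this run's logs).
	out/minikube-linux-arm64 -p addons-501661 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"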

TestAddons/parallel/Registry (15.16s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.32691ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003415577s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002989271s
addons_test.go:392: (dbg) Run:  kubectl --context addons-501661 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-501661 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-501661 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.611460306s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 ip
2025/10/26 14:18:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable registry --alsologtostderr -v=1: exit status 11 (271.096835ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:04.690305  722786 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:04.690988  722786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:04.691005  722786 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:04.691012  722786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:04.691289  722786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:04.691582  722786 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:04.691943  722786 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:04.691959  722786 addons.go:606] checking whether the cluster is paused
	I1026 14:18:04.692060  722786 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:04.692072  722786 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:04.692519  722786 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:04.711592  722786 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:04.711652  722786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:04.728626  722786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:04.840472  722786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:04.840582  722786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:04.875933  722786 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:04.875959  722786 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:04.875963  722786 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:04.875967  722786 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:04.875976  722786 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:04.875980  722786 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:04.875984  722786 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:04.875986  722786 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:04.875990  722786 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:04.875996  722786 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:04.875999  722786 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:04.876002  722786 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:04.876006  722786 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:04.876010  722786 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:04.876015  722786 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:04.876020  722786 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:04.876023  722786 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:04.876027  722786 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:04.876030  722786 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:04.876033  722786 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:04.876038  722786 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:04.876044  722786 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:04.876047  722786 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:04.876050  722786 cri.go:89] found id: ""
	I1026 14:18:04.876100  722786 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:04.891270  722786 out.go:203] 
	W1026 14:18:04.894196  722786 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:04.894224  722786 out.go:285] * 
	* 
	W1026 14:18:04.900561  722786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:04.903480  722786 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.16s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.742859ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-501661
addons_test.go:332: (dbg) Run:  kubectl --context addons-501661 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.148532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:52.949329  724792 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:52.950682  724792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.950703  724792 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:52.950710  724792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.951101  724792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:52.951445  724792 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:52.951930  724792 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.951952  724792 addons.go:606] checking whether the cluster is paused
	I1026 14:18:52.952092  724792 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.952109  724792 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:52.952624  724792 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:52.970816  724792 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:52.970885  724792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:52.989319  724792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:53.095656  724792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:53.095756  724792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:53.126326  724792 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:53.126400  724792 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:53.126412  724792 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:53.126418  724792 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:53.126421  724792 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:53.126426  724792 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:53.126429  724792 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:53.126433  724792 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:53.126437  724792 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:53.126444  724792 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:53.126452  724792 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:53.126455  724792 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:53.126459  724792 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:53.126463  724792 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:53.126469  724792 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:53.126486  724792 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:53.126495  724792 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:53.126501  724792 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:53.126504  724792 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:53.126508  724792 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:53.126513  724792 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:53.126517  724792 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:53.126520  724792 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:53.126523  724792 cri.go:89] found id: ""
	I1026 14:18:53.126582  724792 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:53.142222  724792 out.go:203] 
	W1026 14:18:53.145054  724792 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:53.145115  724792 out.go:285] * 
	* 
	W1026 14:18:53.151603  724792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:53.154597  724792 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (143.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-501661 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-501661 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-501661 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [43d2df41-da74-47c9-87fd-22e2be1104de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [43d2df41-da74-47c9-87fd-22e2be1104de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00313152s
I1026 14:18:35.617880  715440 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.602171314s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
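
The inner curl exits with status 28, curl's operation-timeout code, so the connection or transfer timed out before nginx answered within the 2m9s window. A hedged triage sketch reusing the profile and Host header from this test; the ingress-nginx namespace is the one the test waits on above, while the controller deployment name is an assumption based on the upstream ingress-nginx manifests:

	# Re-run the probe with a short timeout, printing only the HTTP status (000 on timeout).
	out/minikube-linux-arm64 -p addons-501661 ssh "curl -s --max-time 10 -o /dev/null -w '%{http_code}' -H 'Host: nginx.example.com' http://127.0.0.1/"

	# Check controller health and recent logs (deployment name is an assumption).
	kubectl --context addons-501661 -n ingress-nginx get pods -o wide
	kubectl --context addons-501661 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50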
addons_test.go:288: (dbg) Run:  kubectl --context addons-501661 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-501661
helpers_test.go:243: (dbg) docker inspect addons-501661:

-- stdout --
	[
	    {
	        "Id": "33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0",
	        "Created": "2025-10-26T14:15:07.120202821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 716600,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:15:07.183599693Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/hosts",
	        "LogPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0-json.log",
	        "Name": "/addons-501661",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-501661:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-501661",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0",
	                "LowerDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-501661",
	                "Source": "/var/lib/docker/volumes/addons-501661/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-501661",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-501661",
	                "name.minikube.sigs.k8s.io": "addons-501661",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bb27cea89a452f483398fba8e83bcc93c8ff35f8316102106d0b8b312d75055",
	            "SandboxKey": "/var/run/docker/netns/5bb27cea89a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-501661": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:08:81:b2:90:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4884fd2f2ecc1d7b1c1eaa3c5ef8ef4f0bdc55395d7ff2fd12eca5ac47f857a9",
	                    "EndpointID": "9b5e65b9add35e1829b44a1c4ab90c4d8e7c0ffa03899b60e23ef68ab250fb09",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-501661",
	                        "33a58f25144b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-501661 -n addons-501661
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-501661 logs -n 25: (1.632744504s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-958542                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-958542 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p binary-mirror-069171 --alsologtostderr --binary-mirror http://127.0.0.1:38609 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-069171   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p binary-mirror-069171                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-069171   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ addons  │ disable dashboard -p addons-501661                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ addons  │ enable dashboard -p addons-501661                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ start   │ -p addons-501661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:17 UTC │
	│ addons  │ addons-501661 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-501661 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ addons-501661 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-501661 ssh cat /opt/local-path-provisioner/pvc-26a36dca-438b-4339-abca-53d25f00dbaf_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ enable headlamp -p addons-501661 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-501661 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-501661                                                                                                                                                                                                                                                                                                                                                                                           │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ addons-501661 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-501661 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:20 UTC │ 26 Oct 25 14:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
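
Note: the [IWEF] header above is the standard glog prefix (severity letter, mmdd date, wall-clock time with microseconds, thread id, source file:line). A minimal sketch of splitting those fields out of lines like the ones below, assuming exactly this layout — the regexp and program are illustrative only, not part of minikube:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches glog headers like "I1026 14:14:42.055233  716202 out.go:360] msg".
    var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
        m := glogLine.FindStringSubmatch("I1026 14:14:42.055233  716202 out.go:360] Setting OutFile to fd 1 ...")
        if m != nil {
            fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
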
	I1026 14:14:42.055233  716202 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:42.055382  716202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:42.055394  716202 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:42.055399  716202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:42.055724  716202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:14:42.056289  716202 out.go:368] Setting JSON to false
	I1026 14:14:42.057257  716202 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14234,"bootTime":1761473848,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:14:42.057460  716202 start.go:141] virtualization:  
	I1026 14:14:42.061131  716202 out.go:179] * [addons-501661] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 14:14:42.064173  716202 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:14:42.064245  716202 notify.go:220] Checking for updates...
	I1026 14:14:42.070226  716202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:42.073357  716202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:14:42.076929  716202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:14:42.079942  716202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 14:14:42.083137  716202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:14:42.086543  716202 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:42.120541  716202 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:14:42.120728  716202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:42.190403  716202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 14:14:42.176559719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:42.190543  716202 docker.go:318] overlay module found
	I1026 14:14:42.194003  716202 out.go:179] * Using the docker driver based on user configuration
	I1026 14:14:42.197229  716202 start.go:305] selected driver: docker
	I1026 14:14:42.197265  716202 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:42.197282  716202 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:14:42.198204  716202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:42.264287  716202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 14:14:42.25345366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:42.264458  716202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:42.264833  716202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:14:42.267814  716202 out.go:179] * Using Docker driver with root privileges
	I1026 14:14:42.270847  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:14:42.270949  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:42.270960  716202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:42.271065  716202 start.go:349] cluster config:
	{Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
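
Note: the cluster config dumped above is what gets persisted as the profile's config.json (see the "Saving config" lines that follow). A deliberately trimmed sketch of reading a few of those fields back, assuming the on-disk JSON uses the field names shown in the dump — minikube's real config struct carries many more fields than this:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Trimmed view of the profile config; field names follow the dump above.
    type clusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
            ContainerRuntime  string
        }
    }

    func main() {
        path := os.Getenv("HOME") + "/.minikube/profiles/addons-501661/config.json"
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var cc clusterConfig
        if err := json.Unmarshal(b, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s runtime=%s k8s=%s mem=%dMB cpus=%d\n",
            cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime,
            cc.KubernetesConfig.KubernetesVersion, cc.Memory, cc.CPUs)
    }
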
	I1026 14:14:42.274434  716202 out.go:179] * Starting "addons-501661" primary control-plane node in "addons-501661" cluster
	I1026 14:14:42.277517  716202 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:42.280575  716202 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:42.283653  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:42.283778  716202 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:42.283796  716202 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:42.283699  716202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:42.283919  716202 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 14:14:42.283930  716202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:42.284363  716202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json ...
	I1026 14:14:42.284457  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json: {Name:mk6ffa79d382f43a49c9863fe564896f0de6493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:42.299975  716202 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:42.300149  716202 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:42.300173  716202 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:42.300182  716202 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:42.300190  716202 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:42.300201  716202 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 14:15:00.269195  716202 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 14:15:00.269269  716202 cache.go:232] Successfully downloaded all kic artifacts
	I1026 14:15:00.269306  716202 start.go:360] acquireMachinesLock for addons-501661: {Name:mk5c0728e792ff8d50e668fa90808b2014a3f87e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:15:00.269444  716202 start.go:364] duration metric: took 117.638µs to acquireMachinesLock for "addons-501661"
	I1026 14:15:00.269473  716202 start.go:93] Provisioning new machine with config: &{Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:00.269565  716202 start.go:125] createHost starting for "" (driver="docker")
	I1026 14:15:00.277930  716202 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 14:15:00.278222  716202 start.go:159] libmachine.API.Create for "addons-501661" (driver="docker")
	I1026 14:15:00.278271  716202 client.go:168] LocalClient.Create starting
	I1026 14:15:00.278444  716202 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 14:15:00.692305  716202 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 14:15:01.009911  716202 cli_runner.go:164] Run: docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 14:15:01.028660  716202 cli_runner.go:211] docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 14:15:01.028780  716202 network_create.go:284] running [docker network inspect addons-501661] to gather additional debugging logs...
	I1026 14:15:01.028803  716202 cli_runner.go:164] Run: docker network inspect addons-501661
	W1026 14:15:01.047016  716202 cli_runner.go:211] docker network inspect addons-501661 returned with exit code 1
	I1026 14:15:01.047056  716202 network_create.go:287] error running [docker network inspect addons-501661]: docker network inspect addons-501661: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-501661 not found
	I1026 14:15:01.047114  716202 network_create.go:289] output of [docker network inspect addons-501661]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-501661 not found
	
	** /stderr **
	I1026 14:15:01.047270  716202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:15:01.066000  716202 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6c5a0}
	I1026 14:15:01.066040  716202 network_create.go:124] attempt to create docker network addons-501661 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 14:15:01.066097  716202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-501661 addons-501661
	I1026 14:15:01.130861  716202 network_create.go:108] docker network addons-501661 192.168.49.0/24 created
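
Note: the attempt/created pair above corresponds to a single docker CLI call. A self-contained Go sketch of the same invocation in the spirit of minikube's cli_runner, with every flag value hard-coded from the logged command (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Flag-for-flag the command logged above: a bridge network with a fixed
        // subnet/gateway, MTU 1500, and minikube's bookkeeping labels.
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=addons-501661",
            "addons-501661")
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %s (err: %v)\n", out, err)
    }
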
	I1026 14:15:01.130895  716202 kic.go:121] calculated static IP "192.168.49.2" for the "addons-501661" container
	I1026 14:15:01.130987  716202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 14:15:01.153411  716202 cli_runner.go:164] Run: docker volume create addons-501661 --label name.minikube.sigs.k8s.io=addons-501661 --label created_by.minikube.sigs.k8s.io=true
	I1026 14:15:01.175241  716202 oci.go:103] Successfully created a docker volume addons-501661
	I1026 14:15:01.175403  716202 cli_runner.go:164] Run: docker run --rm --name addons-501661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --entrypoint /usr/bin/test -v addons-501661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 14:15:02.610772  716202 cli_runner.go:217] Completed: docker run --rm --name addons-501661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --entrypoint /usr/bin/test -v addons-501661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.43532288s)
	I1026 14:15:02.610832  716202 oci.go:107] Successfully prepared a docker volume addons-501661
	I1026 14:15:02.610876  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:02.610904  716202 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 14:15:02.610981  716202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-501661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 14:15:07.049506  716202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-501661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.438482193s)
	I1026 14:15:07.049539  716202 kic.go:203] duration metric: took 4.438632283s to extract preloaded images to volume ...
	W1026 14:15:07.049700  716202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 14:15:07.049835  716202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 14:15:07.104302  716202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-501661 --name addons-501661 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-501661 --network addons-501661 --ip 192.168.49.2 --volume addons-501661:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 14:15:07.416804  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Running}}
	I1026 14:15:07.436894  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:07.457468  716202 cli_runner.go:164] Run: docker exec addons-501661 stat /var/lib/dpkg/alternatives/iptables
	I1026 14:15:07.509525  716202 oci.go:144] the created container "addons-501661" has a running status.
	I1026 14:15:07.509555  716202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa...
	I1026 14:15:07.961680  716202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 14:15:07.982119  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:08.000804  716202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 14:15:08.000828  716202 kic_runner.go:114] Args: [docker exec --privileged addons-501661 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 14:15:08.046819  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:08.065983  716202 machine.go:93] provisionDockerMachine start ...
	I1026 14:15:08.066115  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:08.085522  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:08.085917  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:08.085935  716202 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:15:08.086638  716202 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 14:15:11.240314  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-501661
	
	I1026 14:15:11.240337  716202 ubuntu.go:182] provisioning hostname "addons-501661"
	I1026 14:15:11.240399  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.257715  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.258045  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.258061  716202 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-501661 && echo "addons-501661" | sudo tee /etc/hostname
	I1026 14:15:11.413789  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-501661
	
	I1026 14:15:11.413894  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.432403  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.432739  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.432760  716202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-501661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-501661/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-501661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:15:11.580886  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:15:11.580914  716202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 14:15:11.580933  716202 ubuntu.go:190] setting up certificates
	I1026 14:15:11.580960  716202 provision.go:84] configureAuth start
	I1026 14:15:11.581030  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:11.599217  716202 provision.go:143] copyHostCerts
	I1026 14:15:11.599331  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 14:15:11.599551  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 14:15:11.599644  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 14:15:11.599721  716202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.addons-501661 san=[127.0.0.1 192.168.49.2 addons-501661 localhost minikube]
	I1026 14:15:11.789428  716202 provision.go:177] copyRemoteCerts
	I1026 14:15:11.789502  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:15:11.789544  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.807220  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:11.912316  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:15:11.929810  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:15:11.948586  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 14:15:11.965805  716202 provision.go:87] duration metric: took 384.815466ms to configureAuth
	I1026 14:15:11.965887  716202 ubuntu.go:206] setting minikube options for container-runtime
	I1026 14:15:11.966108  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:11.966252  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.982978  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.983310  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.983331  716202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:15:12.241509  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 14:15:12.241535  716202 machine.go:96] duration metric: took 4.175521394s to provisionDockerMachine
	I1026 14:15:12.241545  716202 client.go:171] duration metric: took 11.963267545s to LocalClient.Create
	I1026 14:15:12.241556  716202 start.go:167] duration metric: took 11.963336928s to libmachine.API.Create "addons-501661"
	I1026 14:15:12.241564  716202 start.go:293] postStartSetup for "addons-501661" (driver="docker")
	I1026 14:15:12.241579  716202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:15:12.241642  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:15:12.241688  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.260095  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.365149  716202 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:15:12.368687  716202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 14:15:12.368744  716202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 14:15:12.368756  716202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 14:15:12.368826  716202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 14:15:12.368854  716202 start.go:296] duration metric: took 127.285041ms for postStartSetup
	I1026 14:15:12.369182  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:12.386160  716202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json ...
	I1026 14:15:12.386460  716202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:15:12.386509  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.403921  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.505954  716202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 14:15:12.510867  716202 start.go:128] duration metric: took 12.241284453s to createHost
	I1026 14:15:12.510890  716202 start.go:83] releasing machines lock for "addons-501661", held for 12.2414373s
	I1026 14:15:12.510971  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:12.527802  716202 ssh_runner.go:195] Run: cat /version.json
	I1026 14:15:12.527855  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.527887  716202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:15:12.527947  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.548368  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.571908  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.745974  716202 ssh_runner.go:195] Run: systemctl --version
	I1026 14:15:12.752629  716202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:15:12.789025  716202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:15:12.793547  716202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:15:12.793626  716202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:15:12.821725  716202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 14:15:12.821748  716202 start.go:495] detecting cgroup driver to use...
	I1026 14:15:12.821783  716202 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 14:15:12.821835  716202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:15:12.839056  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:15:12.852617  716202 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:15:12.852684  716202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:15:12.870809  716202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:15:12.890211  716202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:15:13.006585  716202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:15:13.126625  716202 docker.go:234] disabling docker service ...
	I1026 14:15:13.126729  716202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:15:13.148968  716202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:15:13.162507  716202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:15:13.280578  716202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:15:13.401911  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:15:13.415305  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 14:15:13.430707  716202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:15:13.430826  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.440743  716202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 14:15:13.440860  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.450970  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.460704  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.470294  716202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:15:13.478999  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.488399  716202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.502311  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
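
Note: taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the fragment below. This is a sketch: the section placement follows CRI-O's stock drop-in layout, and the file's other settings are not visible in this log.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
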
	I1026 14:15:13.511952  716202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:15:13.519938  716202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 14:15:13.527330  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:13.636559  716202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 14:15:13.761926  716202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:15:13.762012  716202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:15:13.765985  716202 start.go:563] Will wait 60s for crictl version
	I1026 14:15:13.766050  716202 ssh_runner.go:195] Run: which crictl
	I1026 14:15:13.769918  716202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 14:15:13.798334  716202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 14:15:13.798434  716202 ssh_runner.go:195] Run: crio --version
	I1026 14:15:13.827855  716202 ssh_runner.go:195] Run: crio --version
	I1026 14:15:13.859562  716202 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 14:15:13.862275  716202 cli_runner.go:164] Run: docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:15:13.878673  716202 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 14:15:13.882751  716202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:15:13.893218  716202 kubeadm.go:883] updating cluster {Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:15:13.893352  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:13.893408  716202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:15:13.927258  716202 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:15:13.927285  716202 crio.go:433] Images already preloaded, skipping extraction
	I1026 14:15:13.927342  716202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:15:13.953521  716202 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:15:13.953548  716202 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:15:13.953556  716202 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 14:15:13.953707  716202 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-501661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 14:15:13.953820  716202 ssh_runner.go:195] Run: crio config
	I1026 14:15:14.009658  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:15:14.009688  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:15:14.009737  716202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:15:14.009774  716202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-501661 NodeName:addons-501661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:15:14.009932  716202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-501661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 14:15:14.010021  716202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:15:14.018937  716202 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:15:14.019015  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:15:14.029910  716202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 14:15:14.044376  716202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:15:14.058983  716202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
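
Note: the 2210-byte file written here is the multi-document kubeadm config rendered above. If you want to sanity-check such a file by hand, kubeadm can parse it without mutating the node — --config and --dry-run are standard kubeadm flags, though minikube itself does not run this command here:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
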
	I1026 14:15:14.072864  716202 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 14:15:14.076802  716202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:15:14.087360  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:14.197726  716202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:14.215714  716202 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661 for IP: 192.168.49.2
	I1026 14:15:14.215775  716202 certs.go:195] generating shared ca certs ...
	I1026 14:15:14.215816  716202 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.215996  716202 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 14:15:14.779290  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt ...
	I1026 14:15:14.779327  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt: {Name:mk2c0e7a4e6d1fe9d266ab325b3b3bd561912232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.779525  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key ...
	I1026 14:15:14.779538  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key: {Name:mk02d44b794c6056a853f955e32c6a8c5904be50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.780533  716202 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 14:15:15.952853  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt ...
	I1026 14:15:15.952886  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt: {Name:mkbbf0e37788070513f9effbcb8e28c9fecaefd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:15.953707  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key ...
	I1026 14:15:15.953728  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key: {Name:mk57df68f99156273f52c3d63d326f096df7d363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:15.954333  716202 certs.go:257] generating profile certs ...
	I1026 14:15:15.954398  716202 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key
	I1026 14:15:15.954416  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt with IP's: []
	I1026 14:15:16.494349  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt ...
	I1026 14:15:16.494387  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: {Name:mk05e07203e8ab24bc5dd6dfb5d764b97f63a6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:16.494559  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key ...
	I1026 14:15:16.494572  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key: {Name:mkf37d9c45f7269fb2b9d04391fe254c04b2102f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:16.494652  716202 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce
	I1026 14:15:16.494675  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 14:15:18.064255  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce ...
	I1026 14:15:18.064289  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce: {Name:mkf79e3703563e5002acb2e92656927338d6c675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.065102  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce ...
	I1026 14:15:18.065124  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce: {Name:mk5cba8786b83520a32546a1da36527afa06864d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.065224  716202 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt
	I1026 14:15:18.065310  716202 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key
	I1026 14:15:18.065366  716202 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key
	I1026 14:15:18.065387  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt with IP's: []
	I1026 14:15:18.584573  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt ...
	I1026 14:15:18.584606  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt: {Name:mk995d0661ebf3dd1e98494e769b493197ac7fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.584802  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key ...
	I1026 14:15:18.584823  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key: {Name:mk2e4c388b4bf0fa4afee8eb80584493d5022993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.585023  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 14:15:18.585066  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:15:18.585090  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:15:18.585119  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
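The "generating ... ca cert" steps above (crypto.go) boil down to creating a self-signed CA with Go's crypto/x509 and writing the PEM-encoded cert and key under lock. A minimal sketch of what generating something like "minikubeCA" entails; subject name, key size, and validity are assumptions, not values taken from minikube:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Hedged sketch: build a self-signed CA certificate and key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
```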
	I1026 14:15:18.585748  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:15:18.605838  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 14:15:18.625037  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:15:18.643781  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 14:15:18.662808  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:15:18.681001  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:15:18.698733  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:15:18.715992  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:15:18.734119  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:15:18.755458  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:15:18.770028  716202 ssh_runner.go:195] Run: openssl version
	I1026 14:15:18.776631  716202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:15:18.786381  716202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.790413  716202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.790477  716202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.833779  716202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
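The symlink name b5213941.0 above is not arbitrary: it is the OpenSSL subject-name hash printed by `openssl x509 -hash -noout`, which is how OpenSSL locates CA certs in /etc/ssl/certs. A small sketch reproducing that step (assumes openssl is on PATH and the process may write to /etc/ssl/certs):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Hedged sketch: compute the subject hash and create the <hash>.0 link,
	// matching the openssl + ln -fs pair in the log above.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs equivalent: drop any old link, then create a fresh one.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```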
	I1026 14:15:18.842215  716202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:15:18.845734  716202 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:15:18.845785  716202 kubeadm.go:400] StartCluster: {Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:15:18.845863  716202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:15:18.845928  716202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:15:18.873758  716202 cri.go:89] found id: ""
	I1026 14:15:18.873909  716202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:15:18.881728  716202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:15:18.889624  716202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 14:15:18.889691  716202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:15:18.897763  716202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:15:18.897785  716202 kubeadm.go:157] found existing configuration files:
	
	I1026 14:15:18.897840  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:15:18.905755  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:15:18.905868  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:15:18.913465  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:15:18.921229  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:15:18.921330  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:15:18.928863  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:15:18.936853  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:15:18.936957  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:15:18.944357  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:15:18.952125  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:15:18.952239  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
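The four grep-then-rm pairs above are one loop: each kubeconfig under /etc/kubernetes that does not reference the expected endpoint (here, missing entirely, hence grep exit status 2) is removed so `kubeadm init` can regenerate it. A compact sketch of that cleanup, under the assumption it is a plain contains-check per file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Hedged sketch of the stale-config cleanup seen in the log above.
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// mirrors: grep failed (file missing or wrong endpoint) -> rm -f
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}
```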
	I1026 14:15:18.959819  716202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 14:15:19.002409  716202 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:15:19.002505  716202 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:15:19.032914  716202 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 14:15:19.032998  716202 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 14:15:19.033045  716202 kubeadm.go:318] OS: Linux
	I1026 14:15:19.033103  716202 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 14:15:19.033156  716202 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 14:15:19.033213  716202 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 14:15:19.033273  716202 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 14:15:19.033333  716202 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 14:15:19.033387  716202 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 14:15:19.033441  716202 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 14:15:19.033505  716202 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 14:15:19.033569  716202 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 14:15:19.105822  716202 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:15:19.105949  716202 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:15:19.106051  716202 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:15:19.120973  716202 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:15:19.127053  716202 out.go:252]   - Generating certificates and keys ...
	I1026 14:15:19.127177  716202 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:15:19.127269  716202 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:15:19.401969  716202 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:15:20.738268  716202 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:15:21.499039  716202 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:15:22.504830  716202 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:15:22.866126  716202 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:15:22.866472  716202 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-501661 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:15:23.357362  716202 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:15:23.357723  716202 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-501661 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:15:24.121270  716202 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:15:24.622975  716202 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:15:25.331633  716202 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:15:25.331989  716202 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:15:25.951164  716202 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:15:26.627951  716202 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:15:27.204678  716202 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:15:27.689972  716202 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:15:28.054610  716202 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:15:28.055251  716202 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:15:28.058055  716202 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:15:28.061559  716202 out.go:252]   - Booting up control plane ...
	I1026 14:15:28.061667  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:15:28.061746  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:15:28.061817  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:15:28.077919  716202 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:15:28.078055  716202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:15:28.088555  716202 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:15:28.088682  716202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:15:28.088779  716202 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:15:28.217537  716202 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:15:28.217659  716202 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:15:29.718791  716202 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501705386s
	I1026 14:15:29.722409  716202 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:15:29.722508  716202 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 14:15:29.722603  716202 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:15:29.722685  716202 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:15:33.956459  716202 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.233577585s
	I1026 14:15:34.870155  716202 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.147740541s
	I1026 14:15:36.725419  716202 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002903362s
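The [control-plane-check] lines above poll each component's health endpoint until it answers 200 OK or a 4m0s deadline passes. A minimal polling sketch using the endpoints from the log; skipping TLS verification is an assumption appropriate only for this kind of local self-signed check, and the poll interval is invented:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// local control-plane components serve self-signed certs here
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://192.168.49.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}
```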
	I1026 14:15:36.749105  716202 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:15:36.762280  716202 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:15:36.777260  716202 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:15:36.777577  716202 kubeadm.go:318] [mark-control-plane] Marking the node addons-501661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:15:36.789014  716202 kubeadm.go:318] [bootstrap-token] Using token: p427n9.8rczgd3nf4ylhnbd
	I1026 14:15:36.792158  716202 out.go:252]   - Configuring RBAC rules ...
	I1026 14:15:36.792294  716202 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:15:36.798811  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:15:36.807471  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:15:36.811915  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:15:36.817593  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:15:36.821850  716202 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:15:37.132019  716202 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:15:37.566118  716202 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:15:38.132342  716202 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:15:38.133586  716202 kubeadm.go:318] 
	I1026 14:15:38.133665  716202 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:15:38.133679  716202 kubeadm.go:318] 
	I1026 14:15:38.133761  716202 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:15:38.133786  716202 kubeadm.go:318] 
	I1026 14:15:38.133816  716202 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:15:38.133882  716202 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:15:38.133939  716202 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:15:38.133948  716202 kubeadm.go:318] 
	I1026 14:15:38.134005  716202 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:15:38.134014  716202 kubeadm.go:318] 
	I1026 14:15:38.134064  716202 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:15:38.134073  716202 kubeadm.go:318] 
	I1026 14:15:38.134128  716202 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:15:38.134210  716202 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:15:38.134286  716202 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:15:38.134294  716202 kubeadm.go:318] 
	I1026 14:15:38.134383  716202 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:15:38.134468  716202 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:15:38.134476  716202 kubeadm.go:318] 
	I1026 14:15:38.134564  716202 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p427n9.8rczgd3nf4ylhnbd \
	I1026 14:15:38.134676  716202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 14:15:38.134704  716202 kubeadm.go:318] 	--control-plane 
	I1026 14:15:38.134713  716202 kubeadm.go:318] 
	I1026 14:15:38.134802  716202 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:15:38.134810  716202 kubeadm.go:318] 
	I1026 14:15:38.134896  716202 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p427n9.8rczgd3nf4ylhnbd \
	I1026 14:15:38.135007  716202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 14:15:38.137764  716202 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 14:15:38.138002  716202 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 14:15:38.138115  716202 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 14:15:38.138134  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:15:38.138142  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:15:38.141234  716202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 14:15:38.144134  716202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 14:15:38.148873  716202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 14:15:38.148943  716202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 14:15:38.163122  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 14:15:38.449130  716202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:15:38.449269  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:38.449354  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-501661 minikube.k8s.io/updated_at=2025_10_26T14_15_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-501661 minikube.k8s.io/primary=true
	I1026 14:15:38.469054  716202 ops.go:34] apiserver oom_adj: -16
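The "-16" above comes from `cat /proc/$(pgrep kube-apiserver)/oom_adj`: a negative adjustment tells the kernel's OOM killer to prefer other processes over the apiserver. A tiny sketch reading it for a given pid (oom_adj is the legacy proc file; oom_score_adj is its modern replacement):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Hedged sketch: print a process's OOM adjustment, as checked above.
	pid := os.Args[1] // e.g. the output of `pgrep kube-apiserver`
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -16 makes the kernel far less likely to kill this process under memory pressure.
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
```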
	I1026 14:15:38.601307  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:39.101891  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:39.601356  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:40.102220  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:40.601638  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:41.102080  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:41.602098  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.101589  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.602154  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.752586  716202 kubeadm.go:1113] duration metric: took 4.303374534s to wait for elevateKubeSystemPrivileges
	I1026 14:15:42.752613  716202 kubeadm.go:402] duration metric: took 23.90683214s to StartCluster
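The run of identical `kubectl get sa default` commands above, spaced roughly 500ms apart, is a readiness gate: after the minikube-rbac clusterrolebinding is created, minikube polls until the controller manager has created the default ServiceAccount, then records the duration ("took 4.303374534s to wait for elevateKubeSystemPrivileges"). A sketch of that loop, with a made-up overall deadline:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hedged sketch: retry `kubectl get sa default` every 500ms until the
	// default ServiceAccount exists (it is created asynchronously).
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
```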
	I1026 14:15:42.752630  716202 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:42.752742  716202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:15:42.753160  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:42.753895  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:15:42.753929  716202 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:42.754164  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:42.754203  716202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:15:42.754290  716202 addons.go:69] Setting yakd=true in profile "addons-501661"
	I1026 14:15:42.754316  716202 addons.go:238] Setting addon yakd=true in "addons-501661"
	I1026 14:15:42.754345  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.754404  716202 addons.go:69] Setting inspektor-gadget=true in profile "addons-501661"
	I1026 14:15:42.754430  716202 addons.go:238] Setting addon inspektor-gadget=true in "addons-501661"
	I1026 14:15:42.754476  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.754797  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.755005  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.755367  716202 addons.go:69] Setting metrics-server=true in profile "addons-501661"
	I1026 14:15:42.755401  716202 addons.go:238] Setting addon metrics-server=true in "addons-501661"
	I1026 14:15:42.755426  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.755845  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.756398  716202 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-501661"
	I1026 14:15:42.756421  716202 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-501661"
	I1026 14:15:42.756457  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.756915  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.759216  716202 addons.go:69] Setting cloud-spanner=true in profile "addons-501661"
	I1026 14:15:42.759251  716202 addons.go:238] Setting addon cloud-spanner=true in "addons-501661"
	I1026 14:15:42.759319  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.760016  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.761536  716202 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-501661"
	I1026 14:15:42.766540  716202 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-501661"
	I1026 14:15:42.766591  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.767073  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.783233  716202 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-501661"
	I1026 14:15:42.783302  716202 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-501661"
	I1026 14:15:42.783338  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.783829  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766050  716202 addons.go:69] Setting registry=true in profile "addons-501661"
	I1026 14:15:42.784000  716202 addons.go:238] Setting addon registry=true in "addons-501661"
	I1026 14:15:42.784023  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.784423  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766069  716202 addons.go:69] Setting registry-creds=true in profile "addons-501661"
	I1026 14:15:42.789198  716202 addons.go:238] Setting addon registry-creds=true in "addons-501661"
	I1026 14:15:42.789252  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.789709  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766080  716202 addons.go:69] Setting storage-provisioner=true in profile "addons-501661"
	I1026 14:15:42.826857  716202 addons.go:238] Setting addon storage-provisioner=true in "addons-501661"
	I1026 14:15:42.826937  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.827664  716202 addons.go:69] Setting default-storageclass=true in profile "addons-501661"
	I1026 14:15:42.827723  716202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-501661"
	I1026 14:15:42.828123  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766086  716202 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-501661"
	I1026 14:15:42.836155  716202 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-501661"
	I1026 14:15:42.842532  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766096  716202 addons.go:69] Setting volcano=true in profile "addons-501661"
	I1026 14:15:42.861448  716202 addons.go:238] Setting addon volcano=true in "addons-501661"
	I1026 14:15:42.861513  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.862063  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.864828  716202 addons.go:69] Setting gcp-auth=true in profile "addons-501661"
	I1026 14:15:42.864909  716202 mustload.go:65] Loading cluster: addons-501661
	I1026 14:15:42.865154  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:42.865457  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766102  716202 addons.go:69] Setting volumesnapshots=true in profile "addons-501661"
	I1026 14:15:42.881002  716202 addons.go:238] Setting addon volumesnapshots=true in "addons-501661"
	I1026 14:15:42.881077  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.881601  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766335  716202 out.go:179] * Verifying Kubernetes components...
	I1026 14:15:42.887031  716202 addons.go:69] Setting ingress=true in profile "addons-501661"
	I1026 14:15:42.887057  716202 addons.go:238] Setting addon ingress=true in "addons-501661"
	I1026 14:15:42.887110  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.887644  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.892820  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:42.908290  716202 addons.go:69] Setting ingress-dns=true in profile "addons-501661"
	I1026 14:15:42.908322  716202 addons.go:238] Setting addon ingress-dns=true in "addons-501661"
	I1026 14:15:42.908467  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.909143  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.931172  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.000001  716202 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:15:43.003291  716202 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:15:43.004071  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:15:43.004168  716202 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:15:43.008944  716202 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:43.009020  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:15:43.009110  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.012300  716202 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:43.012724  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:15:43.012908  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.033243  716202 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:15:43.058941  716202 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:15:43.059095  716202 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:15:43.064932  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	W1026 14:15:43.065305  716202 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:15:43.071814  716202 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-501661"
	I1026 14:15:43.078766  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.079239  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.085105  716202 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:15:43.090917  716202 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:15:43.091623  716202 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:15:43.091684  716202 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:15:43.091790  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:15:43.091887  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.093005  716202 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:43.093021  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:15:43.093078  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.071922  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:15:43.095971  716202 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:15:43.096036  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.099535  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.101631  716202 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:43.101686  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:15:43.101774  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.071929  716202 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:15:43.103412  716202 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:15:43.107054  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:15:43.107142  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.108335  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:15:43.113641  716202 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:15:43.113878  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:15:43.113947  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:15:43.113970  716202 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:15:43.114060  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.116639  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:15:43.116806  716202 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:43.116819  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:15:43.116885  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.150284  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:15:43.150308  716202 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:15:43.150372  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.185848  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:15:43.185936  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:15:43.192822  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:15:43.195682  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:43.198572  716202 addons.go:238] Setting addon default-storageclass=true in "addons-501661"
	I1026 14:15:43.199466  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.199914  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.204862  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:15:43.205226  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.206313  716202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:15:43.205498  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 14:15:43.208528  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:43.208655  716202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:43.208666  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 14:15:43.208752  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.233662  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:15:43.233693  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:15:43.233768  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.255123  716202 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:43.255144  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:15:43.255207  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.260530  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.276322  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.289699  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.362310  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.367196  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.381098  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.386342  716202 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:15:43.389139  716202 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:15:43.395219  716202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:43.395242  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:15:43.395307  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.395530  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.396316  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.397299  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.436178  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.443164  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.449868  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.452110  716202 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:43.452129  716202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:15:43.452201  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	W1026 14:15:43.459443  716202 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:43.459542  716202 retry.go:31] will retry after 180.001402ms: ssh: handshake failed: EOF
	I1026 14:15:43.474657  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.496319  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	W1026 14:15:43.497933  716202 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:43.497960  716202 retry.go:31] will retry after 219.050644ms: ssh: handshake failed: EOF
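The "dial failure (will retry)" / "will retry after ..." pairs above show that early SSH dials can race the container's sshd coming up, so failures are retried after a short delay. A sketch of dial-with-retry using the first backoff seen in the log; growing the delay exponentially between attempts is an assumption, not necessarily what retry.go does:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps dialing addr until a connection succeeds or
// attempts are exhausted, sleeping between tries.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	delay := 180 * time.Millisecond // first backoff seen in the log above
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2 // assumed growth policy
	}
	return nil, fmt.Errorf("after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33537", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}
```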
	I1026 14:15:43.533129  716202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:43.851475  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:43.921024  716202 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:43.921059  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:15:44.032982  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:44.088331  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:15:44.088358  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:15:44.097809  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:44.134051  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:44.141751  716202 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:15:44.141818  716202 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:15:44.186408  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:15:44.186472  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:15:44.200180  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:44.276353  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:15:44.276419  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:15:44.283692  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:44.298954  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:44.302066  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:15:44.302135  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:15:44.314909  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:44.317038  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:15:44.317101  716202 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:15:44.319253  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:44.347497  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:44.367298  716202 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:44.367367  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:15:44.380613  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:15:44.380687  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:15:44.462553  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:15:44.462626  716202 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:15:44.524217  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:44.525432  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:15:44.525488  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:15:44.545769  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:15:44.545849  716202 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:15:44.570403  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:15:44.570491  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:15:44.714114  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:15:44.714185  716202 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:15:44.734172  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:44.734237  716202 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:15:44.735463  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:15:44.735519  716202 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:15:44.817930  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:44.818005  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:15:44.851520  716202 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.644790962s)
	I1026 14:15:44.851615  716202 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318461901s)
	I1026 14:15:44.851643  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.000136664s)
	I1026 14:15:44.851686  716202 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
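The completed sed pipeline above rewrites the coredns ConfigMap in place, splicing a hosts stanza ahead of the Corefile's forward directive so that host.minikube.internal resolves to the gateway IP. A minimal Go sketch of that splice (the stanza text is taken from the sed expression in the log; the function itself is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// spliceHosts inserts a hosts{} stanza immediately before the Corefile's
	// "forward . /etc/resolv.conf" line, mirroring the sed expression above.
	func spliceHosts(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		cf := "        errors\n        forward . /etc/resolv.conf\n"
		fmt.Print(spliceHosts(cf, "192.168.49.1"))
	}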
	I1026 14:15:44.853456  716202 node_ready.go:35] waiting up to 6m0s for node "addons-501661" to be "Ready" ...
	I1026 14:15:44.853720  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:15:44.853736  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:15:44.908892  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:45.013769  716202 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:45.013863  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:15:45.022658  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:45.133934  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:15:45.134021  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:15:45.262121  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:45.368529  716202 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-501661" context rescaled to 1 replicas
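The kapi.go:214 line records coredns being rescaled to a single replica. One way to perform that rescale with client-go is through the Deployment's scale subresource, sketched below (the kubeconfig path, namespace, and deployment name come from the log; this is a generic client-go idiom, not necessarily what kapi.go does internally):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		// Read the current scale, set the desired replica count, write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}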
	I1026 14:15:45.435479  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:15:45.435544  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:15:45.567168  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:15:45.567231  716202 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:15:45.790584  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:15:45.790659  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:15:46.018343  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:15:46.018417  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:15:46.245768  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:46.245796  716202 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:15:46.449259  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1026 14:15:46.886479  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
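The node_ready.go warnings here and throughout the rest of this log poll the node object until its Ready condition flips to True, within the 6-minute budget declared above. The underlying check against a fetched corev1.Node looks like the following sketch (an illustrative helper, not minikube's implementation):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// nodeIsReady reports whether the NodeReady condition is True, which is
	// what the node_ready.go retries above are waiting for.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(nodeIsReady(n)) // false: "Ready":"False", will retry
	}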
	I1026 14:15:47.503487  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.470461052s)
	W1026 14:15:47.503529  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:47.503548  716202 retry.go:31] will retry after 229.260003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
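Each failed apply is wrapped in retry.go's backoff: the command is re-run after a short, growing, jittered delay (229ms here, then 344ms, 516ms, 707ms, and so on further down). A minimal Go sketch of that pattern (attempt count, growth rule, and base delay are illustrative, not minikube's actual values):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply mirrors the retry.go pattern in the log: run fn, and on
	// failure sleep a jittered, growing delay before trying again.
	func retryApply(fn func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the delay with the attempt number and add random jitter,
			// producing the uneven intervals seen in the log.
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retryApply(func() error { return errors.New("Process exited with status 1") }, 3, 200*time.Millisecond)
		fmt.Println("gave up:", err)
	}

Note that a backoff like this only helps with transient failures; as the ig-crd.yaml retries below show, it cannot fix a manifest that is invalid on every attempt.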
	I1026 14:15:47.503616  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.405781778s)
	I1026 14:15:47.503672  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.369544623s)
	I1026 14:15:47.733353  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:47.866124  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.665859479s)
	I1026 14:15:47.866199  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.582444968s)
	I1026 14:15:47.866430  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.567410875s)
	I1026 14:15:47.866485  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.551496838s)
	I1026 14:15:47.866540  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.547227878s)
	W1026 14:15:47.982361  716202 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
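This warning is the API server's optimistic-concurrency check firing: the storage class was modified between the read and the write, so the update carried a stale resourceVersion. The stock client-go remedy is to re-read the object and retry the update under retry.RetryOnConflict, sketched below (the kubeconfig path and class name come from the log; the annotation edit is the standard way to mark a default class, not necessarily what the addon callback does):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		// Re-fetch the latest object on every attempt so the update carries a
		// current resourceVersion instead of the stale one that caused the
		// "object has been modified" error above.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}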
	I1026 14:15:49.205258  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.680954568s)
	I1026 14:15:49.205296  716202 addons.go:479] Verifying addon registry=true in "addons-501661"
	I1026 14:15:49.205557  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.296634746s)
	I1026 14:15:49.205575  716202 addons.go:479] Verifying addon metrics-server=true in "addons-501661"
	I1026 14:15:49.205633  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.182900296s)
	I1026 14:15:49.205700  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.858118778s)
	I1026 14:15:49.205719  716202 addons.go:479] Verifying addon ingress=true in "addons-501661"
	I1026 14:15:49.205894  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.943682604s)
	W1026 14:15:49.206522  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:49.206553  716202 retry.go:31] will retry after 344.813051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:49.208816  716202 out.go:179] * Verifying registry addon...
	I1026 14:15:49.208953  716202 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-501661 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:15:49.208974  716202 out.go:179] * Verifying ingress addon...
	I1026 14:15:49.212572  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:15:49.218382  716202 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:15:49.246224  716202 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:49.246251  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.246829  716202 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:15:49.246847  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
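The kapi.go:96 lines that dominate the remainder of this log poll pods matching a label selector until they leave Pending. A compact client-go version of such a wait (selector and namespace mirror the registry case in the log; the polling helper and timeout are a generic idiom, not kapi.go itself):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		sel := "kubernetes.io/minikube-addons=registry"
		// Poll every 500ms until every matching pod is Running or the
		// timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
		if err != nil {
			log.Fatal(err)
		}
	}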
	W1026 14:15:49.366801  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:49.551927  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:49.732388  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.744140  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.844072  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.394765662s)
	I1026 14:15:49.844160  716202 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-501661"
	I1026 14:15:49.844402  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.11100797s)
	W1026 14:15:49.844451  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:49.844558  716202 retry.go:31] will retry after 213.482896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
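The root cause never changes across these retries: kubectl reports that ig-crd.yaml is missing its apiVersion and kind fields, so validation fails on every attempt, --force included. The missing-field condition is easy to reproduce client-side by decoding the manifest's TypeMeta; a sketch using sigs.k8s.io/yaml (an assumed helper library, and a hypothetical manifest body, not the actual ig-crd.yaml contents):

	package main

	import (
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// A manifest body missing apiVersion/kind, like the one the
		// validator above rejects with "[apiVersion not set, kind not set]".
		doc := []byte("metadata:\n  name: example\n")
		var tm metav1.TypeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Println("invalid manifest: apiVersion/kind not set")
		}
	}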
	I1026 14:15:49.847673  716202 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:15:49.851536  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:15:49.869315  716202 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:49.869388  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.058871  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:50.217840  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.225549  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.356611  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.710037  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:15:50.710142  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:50.729170  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.729235  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.735346  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
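The cli_runner lines above resolve which host port Docker published for the container's SSH port 22, using an inspect format template; the resulting port (33537 here) is then fed to the SSH client. The same lookup from Go via os/exec (the template string is copied from the log, minus the literal single quotes cli_runner wraps around it; the container name is from this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into the container's published ports and read the host port
		// mapped to 22/tcp.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-501661").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out)))
	}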
	I1026 14:15:50.855156  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.865522  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:15:50.882325  716202 addons.go:238] Setting addon gcp-auth=true in "addons-501661"
	I1026 14:15:50.882377  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:50.882823  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:50.901076  716202 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:15:50.901136  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:50.925841  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:51.216438  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.228607  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.355613  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.716060  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.722103  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.855182  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:51.857325  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:52.216822  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.222170  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.329122  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.777147292s)
	I1026 14:15:52.329217  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.270298507s)
	W1026 14:15:52.329243  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:52.329261  716202 retry.go:31] will retry after 516.72397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:52.329298  716202 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.42819996s)
	I1026 14:15:52.332269  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:52.335131  716202 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:15:52.337953  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:15:52.337972  716202 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:15:52.351647  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:15:52.351934  716202 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:15:52.355889  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.367979  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:52.368003  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:15:52.381717  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:52.719876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.786164  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.846900  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:52.884382  716202 addons.go:479] Verifying addon gcp-auth=true in "addons-501661"
	I1026 14:15:52.885942  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.887644  716202 out.go:179] * Verifying gcp-auth addon...
	I1026 14:15:52.891303  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:15:52.901785  716202 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:15:52.901805  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.216352  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.221421  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.355731  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.394834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:53.701731  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:53.701765  716202 retry.go:31] will retry after 707.370273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:53.716173  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.722414  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.856340  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.895250  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.216139  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.225672  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.354809  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:54.356934  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:54.394812  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.410182  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:54.715814  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.722606  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.856663  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.894168  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.216409  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.227433  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:55.240380  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:55.240410  716202 retry.go:31] will retry after 1.206291057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:55.355621  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.394630  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.715785  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.721722  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.854916  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.894809  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.216036  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.222902  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.355157  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:56.357438  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:56.395140  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.447223  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:56.715834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.722204  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.858074  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.895229  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.216556  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.225293  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:57.260166  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:57.260196  716202 retry.go:31] will retry after 1.06760712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:57.355325  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.394090  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.716803  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.721691  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.854262  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.894246  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.215348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.229836  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.328214  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:58.357513  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:58.358437  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.395050  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.715980  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.722207  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.856444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.894018  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:59.138895  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:59.138941  716202 retry.go:31] will retry after 2.453558555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:59.215579  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.221421  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.355665  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.394558  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.715911  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.721783  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.854590  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.894609  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.282485  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.283118  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:16:00.359800  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:00.359816  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.395070  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.716352  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.722398  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.855288  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.894995  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.216348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.228345  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.355373  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.394008  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.593343  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:01.716260  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.722487  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.856794  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.895069  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.215676  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.221913  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.356918  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.395647  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:02.407365  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:02.407398  716202 retry.go:31] will retry after 5.142719519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:02.716305  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.722070  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.855059  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:02.856973  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:02.894674  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.216133  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.222509  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.354893  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.394138  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.717162  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.722114  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.854999  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.894824  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.215684  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.227336  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.356516  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.394317  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.716534  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.721651  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.854342  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.895332  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.215986  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.225857  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.354799  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:05.357328  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:05.394282  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.715835  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.721545  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.855125  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.894877  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.216117  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.226487  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.354920  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.407876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.716218  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.721997  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.855210  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.895265  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.216490  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.222638  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.355716  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.394528  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
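The kapi.go lines above show minikube polling four addon label selectors (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) roughly twice per second until the matching pods leave Pending. The same state can be checked by hand with kubectl; a minimal sketch, assuming the profile's kubeconfig is active (selector values are taken from the log):

    kubectl get pods -A -l kubernetes.io/minikube-addons=registry
    kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
    kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl get pods -A -l kubernetes.io/minikube-addons=gcp-auth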
	I1026 14:16:07.550749  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:07.716732  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.727931  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.856188  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:07.859660  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:07.895108  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.216095  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.221988  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.358225  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:08.376075  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:08.376164  716202 retry.go:31] will retry after 6.878255973s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
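The failure above is client-side validation: kubectl rejects ig-crd.yaml because at least one document in the file declares neither apiVersion nor kind, the two fields every Kubernetes manifest must set. Note the apply is not atomic; the objects from ig-deployment.yaml still report "unchanged"/"configured" while the CRD file is rejected. A minimal sketch of reproducing the check without touching the cluster (paths and kubectl binary are the ones from the log; --dry-run=client still runs client-side validation):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
      -f /etc/kubernetes/addons/ig-crd.yaml
    # a valid CRD document would start with something like:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition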
	I1026 14:16:08.394800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.715997  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.721734  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.854575  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.894626  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.215754  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.227692  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.354637  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.395532  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.716100  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.722401  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.855129  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.894525  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.215444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.225624  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.355750  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:10.358365  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:10.394968  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.715562  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.721134  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.854989  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.895018  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.216198  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.222698  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.354519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.394343  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.715947  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.721908  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.855953  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.894800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.215933  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.221780  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.354518  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.394697  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.715495  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.721322  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.855429  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:12.856315  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:12.894264  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.215683  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.221856  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.354989  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.395230  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.716250  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.722334  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.855519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.894584  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.215299  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.222131  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.354958  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.394608  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.715845  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.721680  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.854461  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:14.856778  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:14.894774  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.216076  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:15.222591  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.254921  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:15.356006  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.394111  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.716328  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:15.722042  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.860196  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.895490  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:16.093097  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:16.093133  716202 retry.go:31] will retry after 10.749955074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:16.215888  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:16.225612  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.356255  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.395470  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.715515  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:16.721150  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.854982  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:16.857176  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:16.895064  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.216664  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:17.227619  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.354661  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.395273  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.715447  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:17.722217  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.855165  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.894289  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.216346  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:18.222187  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.355771  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.394982  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.716423  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:18.721455  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.855569  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:18.857714  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:18.894570  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.215831  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:19.223789  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.354912  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.394667  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.715571  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:19.721423  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.855346  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.894397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:20.215561  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:20.229153  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.354871  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.394793  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:20.716164  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:20.722050  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.855847  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.894918  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:21.215798  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:21.222059  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.355670  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:21.356114  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:21.394993  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:21.716408  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:21.722136  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.856689  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.894898  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:22.216080  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:22.225724  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.354882  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.394260  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:22.716447  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:22.721384  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.856198  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.894926  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:23.216001  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:23.225696  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.371888  716202 node_ready.go:49] node "addons-501661" is "Ready"
	I1026 14:16:23.371918  716202 node_ready.go:38] duration metric: took 38.518266051s for node "addons-501661" to be "Ready" ...
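Each node_ready.go warning earlier in the log corresponds to one failed poll of the node's Ready condition; at 14:16:23 the node finally reports Ready, 38.5s after the wait began. The equivalent one-shot check with kubectl, as a sketch (node name from the log, timeout illustrative):

    kubectl wait --for=condition=Ready node/addons-501661 --timeout=120s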
	I1026 14:16:23.371933  716202 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:16:23.372014  716202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:16:23.396671  716202 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:16:23.396727  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.410186  716202 api_server.go:72] duration metric: took 40.656211973s to wait for apiserver process to appear ...
	I1026 14:16:23.410213  716202 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:16:23.410232  716202 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 14:16:23.434616  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:23.440772  716202 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 14:16:23.447160  716202 api_server.go:141] control plane version: v1.34.1
	I1026 14:16:23.447206  716202 api_server.go:131] duration metric: took 36.975658ms to wait for apiserver health ...
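The healthz probe is a plain HTTPS GET against the apiserver, which answers HTTP 200 with the literal body "ok". A sketch of the same check from the host (endpoint from the log; -k skips verification of the cluster's self-signed CA, and the request works unauthenticated because Kubernetes binds system:public-info-viewer to anonymous users for the health endpoints by default):

    curl -k https://192.168.49.2:8443/healthz
    # ok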
	I1026 14:16:23.447232  716202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:16:23.477172  716202 system_pods.go:59] 19 kube-system pods found
	I1026 14:16:23.477208  716202 system_pods.go:61] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending
	I1026 14:16:23.477241  716202 system_pods.go:61] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending
	I1026 14:16:23.477253  716202 system_pods.go:61] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.477260  716202 system_pods.go:61] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.477265  716202 system_pods.go:61] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.477269  716202 system_pods.go:61] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.477274  716202 system_pods.go:61] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.477303  716202 system_pods.go:61] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.477314  716202 system_pods.go:61] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.477319  716202 system_pods.go:61] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.477324  716202 system_pods.go:61] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.477343  716202 system_pods.go:61] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.477355  716202 system_pods.go:61] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.477361  716202 system_pods.go:61] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending
	I1026 14:16:23.477378  716202 system_pods.go:61] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending
	I1026 14:16:23.477390  716202 system_pods.go:61] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.477396  716202 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.477411  716202 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.477424  716202 system_pods.go:61] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending
	I1026 14:16:23.477430  716202 system_pods.go:74] duration metric: took 30.184012ms to wait for pod list to return data ...
	I1026 14:16:23.477455  716202 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:16:23.506250  716202 default_sa.go:45] found service account: "default"
	I1026 14:16:23.506279  716202 default_sa.go:55] duration metric: took 28.812458ms for default service account to be created ...
	I1026 14:16:23.506299  716202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:16:23.514922  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:23.514954  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending
	I1026 14:16:23.514960  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending
	I1026 14:16:23.514965  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.514969  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.514972  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.515009  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.515019  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.515024  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.515028  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.515032  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.515041  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.515051  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.515058  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.515087  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending
	I1026 14:16:23.515091  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending
	I1026 14:16:23.515106  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.515119  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.515123  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.515127  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending
	I1026 14:16:23.515159  716202 retry.go:31] will retry after 286.658676ms: missing components: kube-dns
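system_pods.go keeps polling until every component it tracks is Running; here the only missing one is kube-dns, which maps to the coredns pod still listed Pending above. The corresponding manual check, as a sketch:

    kubectl -n kube-system get pods -l k8s-app=kube-dns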
	I1026 14:16:23.782747  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.782771  716202 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:16:23.782784  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:23.834637  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:23.834683  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:23.834693  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:23.834699  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.834737  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.834742  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.834747  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.834757  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.834761  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.834769  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.834798  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.834815  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.834828  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.834833  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.834840  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:23.834849  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:23.834853  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.834857  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.834881  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.834891  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:23.834906  716202 retry.go:31] will retry after 363.438345ms: missing components: kube-dns
	I1026 14:16:23.871457  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.900412  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:24.208353  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:24.208410  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:24.208439  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:24.208457  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:24.208466  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:24.208475  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:24.208496  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:24.208507  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:24.208511  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:24.208528  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:24.208541  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:24.208546  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:24.208552  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:24.208576  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:24.208588  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:24.208596  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:24.208611  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:24.208618  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.208630  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.208649  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:24.208678  716202 retry.go:31] will retry after 438.728691ms: missing components: kube-dns
	I1026 14:16:24.224275  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:24.224906  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:24.357877  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.475417  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:24.654845  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:24.654924  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:24.654950  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:24.654973  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:24.655022  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:24.655043  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:24.655062  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:24.655090  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:24.655111  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:24.655130  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:24.655153  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:24.655179  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:24.655199  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:24.655221  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:24.655245  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:24.655265  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:24.655288  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:24.655312  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.655331  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.655349  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:24.655377  716202 retry.go:31] will retry after 540.807434ms: missing components: kube-dns
	I1026 14:16:24.745840  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:24.746351  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:24.865221  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.902436  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:25.203524  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:25.203616  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Running
	I1026 14:16:25.203644  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:25.203672  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:25.203699  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:25.203718  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:25.203735  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:25.203759  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:25.203779  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:25.203805  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:25.203823  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:25.203843  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:25.203864  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:25.203885  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:25.203906  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:25.203936  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:25.203957  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:25.203977  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:25.204001  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:25.204019  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Running
	I1026 14:16:25.204042  716202 system_pods.go:126] duration metric: took 1.697736705s to wait for k8s-apps to be running ...
	I1026 14:16:25.204062  716202 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:16:25.204141  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:16:25.217174  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:25.219412  716202 system_svc.go:56] duration metric: took 15.342715ms WaitForService to wait for kubelet
	I1026 14:16:25.219497  716202 kubeadm.go:586] duration metric: took 42.465538074s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:16:25.219531  716202 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:16:25.227910  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:25.231735  716202 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 14:16:25.231813  716202 node_conditions.go:123] node cpu capacity is 2
	I1026 14:16:25.231840  716202 node_conditions.go:105] duration metric: took 12.287672ms to run NodePressure ...
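The NodePressure pass reads the node's reported capacity (the ephemeral-storage and cpu values logged just above) from its status. A sketch of the same lookup with kubectl:

    kubectl get node addons-501661 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'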
	I1026 14:16:25.231873  716202 start.go:241] waiting for startup goroutines ...
	I1026 14:16:25.355644  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.394846  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:25.715893  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:25.721959  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:25.858452  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.895166  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:26.218012  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:26.223075  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:26.362777  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.461348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:26.719292  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:26.723810  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:26.844181  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:26.860547  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.894672  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:27.216629  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.246683  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:27.355016  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:27.394934  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:27.716181  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.722229  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:27.855897  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:27.895010  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:28.216742  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:28.229384  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.260387  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.416166033s)
	W1026 14:16:28.260427  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:28.260445  716202 retry.go:31] will retry after 18.725162743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
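Across the three failed applies the retry delays grow from ~6.9s to ~10.7s to ~18.7s, consistent with retry.go backing off with jitter between attempts. A minimal sketch of that apply-with-backoff pattern in shell (delays and attempt count are illustrative, not minikube's exact schedule; the command is the one from the log):

    delay=7
    for attempt in 1 2 3 4; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
      delay=$((delay * 2))   # double the wait after each failure
    done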
	[... 150 near-identical kapi.go:96 "waiting for pod" polling lines elided: registry, ingress-nginx, csi-hostpath-driver, and gcp-auth all remained Pending from 14:16:28 to 14:16:46, polled roughly every 500ms ...]
	I1026 14:16:46.985730  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:47.217233  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.222718  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.355061  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.395802  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:47.717024  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.722148  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.855800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.894521  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.075306  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.089535563s)
	W1026 14:16:48.075399  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:48.075431  716202 retry.go:31] will retry after 30.211838015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	[... 208 near-identical kapi.go:96 "waiting for pod" polling lines elided: registry, ingress-nginx, csi-hostpath-driver, and gcp-auth all remained Pending from 14:16:48 to 14:17:13 ...]
	I1026 14:17:14.216490  716202 kapi.go:107] duration metric: took 1m25.003915701s to wait for kubernetes.io/minikube-addons=registry ...
	[... 25 near-identical kapi.go:96 "waiting for pod" polling lines elided: ingress-nginx, csi-hostpath-driver, and gcp-auth remained Pending from 14:17:14 to 14:17:18 ...]
	I1026 14:17:18.287811  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:17:18.355087  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.395450  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:18.721924  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:18.855378  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.895381  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.234419  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.356526  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.394495  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.679918  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392026421s)
	W1026 14:17:19.680008  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:17:19.680133  716202 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
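The failure above is kubectl's client-side schema validation: every YAML document in an applied file must set both apiVersion and kind, and at least one document in ig-crd.yaml evidently does not. A quick way to inspect the manifest head on the node, sketched here (profile name taken from this run; the expected output shape is illustrative):

    minikube ssh -p addons-501661 "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
    # a valid document begins with both required fields, e.g. (hypothetical values):
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition

As the error text notes, --validate=false would skip the check, but that only masks the malformed document rather than repairing it.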
	I1026 14:17:19.722167  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.855689  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.894625  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.222559  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.354973  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.399476  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.722070  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.856525  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.908821  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.228151  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:21.355657  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.394494  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.722167  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:21.855397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.898085  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.232339  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.363740  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.403873  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.722336  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.855724  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.895152  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.221689  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.355492  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.394669  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.725577  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.854859  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.894868  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.230793  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.355417  716202 kapi.go:107] duration metric: took 1m34.503877851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 14:17:24.395930  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.722780  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.895126  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.223566  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.394715  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.722610  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.894699  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.222956  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.395461  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.722011  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.895086  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.226295  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.394876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.722628  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.894609  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.222223  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.394675  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.723231  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.896045  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.222794  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.395444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.722033  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.895169  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.222081  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.395489  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.722888  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.895397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.221733  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.395339  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.722135  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.894292  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.222733  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.395243  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.721781  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.895533  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.222178  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.394323  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.722604  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.895326  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.237847  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:34.397382  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.721902  716202 kapi.go:107] duration metric: took 1m45.5035165s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:17:34.895028  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.394328  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.895322  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.396519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.899743  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:37.394525  716202 kapi.go:107] duration metric: took 1m44.503223137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:17:37.397340  716202 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-501661 cluster.
	I1026 14:17:37.400174  716202 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:17:37.402977  716202 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
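For pods that should not receive the mounted credentials, the opt-out named above is a pod label with the gcp-auth-skip-secret key. A minimal sketch, assuming the webhook checks the label at admission time (the pod name is illustrative; the image is the busybox image already used in this run):

    kubectl run no-creds-probe --restart=Never \
      --labels="gcp-auth-skip-secret=true" \
      --image=gcr.io/k8s-minikube/busybox -- sleep 3600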
	I1026 14:17:37.405963  716202 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 14:17:37.408912  716202 addons.go:514] duration metric: took 1m54.654688472s for enable addons: enabled=[amd-gpu-device-plugin registry-creds cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1026 14:17:37.408972  716202 start.go:246] waiting for cluster config update ...
	I1026 14:17:37.408994  716202 start.go:255] writing updated cluster config ...
	I1026 14:17:37.409313  716202 ssh_runner.go:195] Run: rm -f paused
	I1026 14:17:37.412992  716202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:17:37.495636  716202 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5nrx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.503613  716202 pod_ready.go:94] pod "coredns-66bc5c9577-5nrx2" is "Ready"
	I1026 14:17:37.503644  716202 pod_ready.go:86] duration metric: took 7.977333ms for pod "coredns-66bc5c9577-5nrx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.506148  716202 pod_ready.go:83] waiting for pod "etcd-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.511640  716202 pod_ready.go:94] pod "etcd-addons-501661" is "Ready"
	I1026 14:17:37.511715  716202 pod_ready.go:86] duration metric: took 5.537177ms for pod "etcd-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.514407  716202 pod_ready.go:83] waiting for pod "kube-apiserver-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.519914  716202 pod_ready.go:94] pod "kube-apiserver-addons-501661" is "Ready"
	I1026 14:17:37.519942  716202 pod_ready.go:86] duration metric: took 5.510428ms for pod "kube-apiserver-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.523200  716202 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.816869  716202 pod_ready.go:94] pod "kube-controller-manager-addons-501661" is "Ready"
	I1026 14:17:37.816901  716202 pod_ready.go:86] duration metric: took 293.672864ms for pod "kube-controller-manager-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.023383  716202 pod_ready.go:83] waiting for pod "kube-proxy-rxl4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.417690  716202 pod_ready.go:94] pod "kube-proxy-rxl4x" is "Ready"
	I1026 14:17:38.417717  716202 pod_ready.go:86] duration metric: took 394.251812ms for pod "kube-proxy-rxl4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.617729  716202 pod_ready.go:83] waiting for pod "kube-scheduler-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:39.016800  716202 pod_ready.go:94] pod "kube-scheduler-addons-501661" is "Ready"
	I1026 14:17:39.016830  716202 pod_ready.go:86] duration metric: took 399.073505ms for pod "kube-scheduler-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:39.016848  716202 pod_ready.go:40] duration metric: took 1.603819285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
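The pod_ready loop above amounts to waiting for each labelled control-plane pod to report Ready. Roughly the same check can be reproduced by hand with kubectl wait, a sketch using three of the label selectors from the log:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns   --for=condition=Ready --timeout=4m0s
    kubectl -n kube-system wait pod -l component=etcd     --for=condition=Ready --timeout=4m0s
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m0s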
	I1026 14:17:39.075659  716202 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 14:17:39.078910  716202 out.go:179] * Done! kubectl is now configured to use "addons-501661" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 14:20:37 addons-501661 crio[828]: time="2025-10-26T14:20:37.677166618Z" level=info msg="Removed container bc5f53f161c58151217e5eac448928299391f3ca9d65c0b373d801fd8d128dfd: kube-system/registry-creds-764b6fb674-2fxp4/registry-creds" id=05de17cf-a29d-4318-94ee-a6f90b5156bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.842910859Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-fznqt/POD" id=2fa7832e-72f0-4231-a17e-73b1e780129c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.842987504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.887651771Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-fznqt Namespace:default ID:ff0e66ef3a894821748f57f8c57a8f73a15dc23681d7cd0db5a5601bc618811d UID:6ed7f633-1859-4cfe-80da-0633ea377425 NetNS:/var/run/netns/18a6b65e-6e28-4f4f-b7d5-b191a9ec7116 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c80c0}] Aliases:map[]}"
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.887872942Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-fznqt to CNI network \"kindnet\" (type=ptp)"
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.900830079Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-fznqt Namespace:default ID:ff0e66ef3a894821748f57f8c57a8f73a15dc23681d7cd0db5a5601bc618811d UID:6ed7f633-1859-4cfe-80da-0633ea377425 NetNS:/var/run/netns/18a6b65e-6e28-4f4f-b7d5-b191a9ec7116 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c80c0}] Aliases:map[]}"
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.901185372Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-fznqt for CNI network kindnet (type=ptp)"
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.905444408Z" level=info msg="Ran pod sandbox ff0e66ef3a894821748f57f8c57a8f73a15dc23681d7cd0db5a5601bc618811d with infra container: default/hello-world-app-5d498dc89-fznqt/POD" id=2fa7832e-72f0-4231-a17e-73b1e780129c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.910484536Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=162e17b5-aab4-432f-b04a-8648c8ad6926 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.910610396Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=162e17b5-aab4-432f-b04a-8648c8ad6926 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.910645982Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=162e17b5-aab4-432f-b04a-8648c8ad6926 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.917808884Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c79c8dd9-dce5-46fd-b46a-619d44216f2c name=/runtime.v1.ImageService/PullImage
	Oct 26 14:20:45 addons-501661 crio[828]: time="2025-10-26T14:20:45.91970288Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.572367642Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=c79c8dd9-dce5-46fd-b46a-619d44216f2c name=/runtime.v1.ImageService/PullImage
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.572965144Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c242be7c-c08e-48d6-bf22-74d3ee0d1c70 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.576396423Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=92ac5e95-c014-47b4-81ce-616501bfe4df name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.589059905Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-fznqt/hello-world-app" id=8c9b8aaa-ed13-459b-b00e-4b919b5f6501 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.589198851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.598457515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.598671359Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a467877bd37bad45576dac74c1a6c921a18eb4424400a604657e6d61bf5336e2/merged/etc/passwd: no such file or directory"
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.598696319Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a467877bd37bad45576dac74c1a6c921a18eb4424400a604657e6d61bf5336e2/merged/etc/group: no such file or directory"
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.599002044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.621893854Z" level=info msg="Created container 83e688b005767b9ca45cb9b526c5aaece6c12fe1ff5e09803a88d7d561fc2c77: default/hello-world-app-5d498dc89-fznqt/hello-world-app" id=8c9b8aaa-ed13-459b-b00e-4b919b5f6501 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.622620161Z" level=info msg="Starting container: 83e688b005767b9ca45cb9b526c5aaece6c12fe1ff5e09803a88d7d561fc2c77" id=a4d98a48-3abc-4d87-a61f-152d112770fd name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:20:46 addons-501661 crio[828]: time="2025-10-26T14:20:46.624647286Z" level=info msg="Started container" PID=7323 containerID=83e688b005767b9ca45cb9b526c5aaece6c12fe1ff5e09803a88d7d561fc2c77 description=default/hello-world-app-5d498dc89-fznqt/hello-world-app id=a4d98a48-3abc-4d87-a61f-152d112770fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff0e66ef3a894821748f57f8c57a8f73a15dc23681d7cd0db5a5601bc618811d
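The CRI-O entries above trace the standard pull path: an ImageStatus probe misses, PullImage fetches the tag, and the image is then recorded by digest before the container is created and started. The same flow can be exercised manually with crictl from a shell on the node, a sketch:

    # inside `minikube ssh -p addons-501661`
    sudo crictl pull docker.io/kicbase/echo-server:1.0
    sudo crictl images --digests docker.io/kicbase/echo-server
    sudo crictl ps --name hello-world-app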
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	83e688b005767       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   ff0e66ef3a894       hello-world-app-5d498dc89-fznqt             default
	d5c30369b6ee0       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           1                   fe684bf53836a       registry-creds-764b6fb674-2fxp4             kube-system
	50657f38f808d       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   7862603137f67       nginx                                       default
	1a2edd4bfbc59       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   6f61f35080e24       busybox                                     default
	9870f74d88fd1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ad802ef770dce       gcp-auth-78565c9fb4-vzzg7                   gcp-auth
	0d0f4ac4419c2       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   259bd70bfa23a       ingress-nginx-controller-675c5ddd98-hrnwk   ingress-nginx
	c4ec9e9442876       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	0c73c42d96770       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	c50e91d190b6b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	a850489f8b2c4       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	e326676ba82b9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	42eeb20fc4611       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   ebfed1fd0c284       gadget-2t2bm                                gadget
	9be4b4714f4a0       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    3                   8af895ee8a903       ingress-nginx-admission-patch-qmxvf         ingress-nginx
	e7b0defbfd9a0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   2d4c1286a2f8c       registry-proxy-26bjw                        kube-system
	ec7c2286fab64       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   337e1541c58c1       snapshot-controller-7d9fbc56b8-hpxf6        kube-system
	6b9afdcd645ac       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   41f92b9da6aa9       csi-hostpath-resizer-0                      kube-system
	eddafdd69a2fd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                    kube-system
	f11053563b42d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   d057682f3b18d       snapshot-controller-7d9fbc56b8-dbl7s        kube-system
	7164bdbbcc6c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   25c02f0f71529       ingress-nginx-admission-create-pptg4        ingress-nginx
	82e271218789e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   3d88b63f4857e       kube-ingress-dns-minikube                   kube-system
	b7521966d45ca       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   fcc067a694402       local-path-provisioner-648f6765c9-4fmxv     local-path-storage
	613e459325bad       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   198b16435060c       yakd-dashboard-5ff678cb9-bdtjs              yakd-dashboard
	637c3d5659f24       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   35f6f265994db       csi-hostpath-attacher-0                     kube-system
	7d68d150ab8c2       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   2fa214e6160e4       nvidia-device-plugin-daemonset-j5x9f        kube-system
	755c0c31c073f       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   b56d989a649f4       cloud-spanner-emulator-86bd5cbb97-rt9p8     default
	65de879233549       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   705b0269b42c9       registry-6b586f9694-ndtxx                   kube-system
	c136798b61600       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   491d5e4e25263       metrics-server-85b7d694d7-ljcz5             kube-system
	53981aeb4a23e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   82a479e89e2bb       coredns-66bc5c9577-5nrx2                    kube-system
	ffb41f5a461fd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   06063f345d464       storage-provisioner                         kube-system
	44bf385182957       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   7f7acc02cd1a2       kindnet-wggwr                               kube-system
	2b96a203a94a6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   96a06c342c64e       kube-proxy-rxl4x                            kube-system
	b4c2f12d53270       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   222cb3dfcf334       etcd-addons-501661                          kube-system
	fb9eabe84a99f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   cb2d5f0fb119b       kube-scheduler-addons-501661                kube-system
	ebd8af71508b5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   dc3ab474c62ca       kube-apiserver-addons-501661                kube-system
	90535ff6ce64e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   e8d8eea3c7820       kube-controller-manager-addons-501661       kube-system
	
	
	==> coredns [53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76] <==
	[INFO] 10.244.0.17:47525 - 60777 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001624445s
	[INFO] 10.244.0.17:47525 - 43119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000118171s
	[INFO] 10.244.0.17:47525 - 17933 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000087648s
	[INFO] 10.244.0.17:41900 - 48821 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188629s
	[INFO] 10.244.0.17:41900 - 48567 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000247608s
	[INFO] 10.244.0.17:40949 - 4273 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010419s
	[INFO] 10.244.0.17:40949 - 4076 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179292s
	[INFO] 10.244.0.17:48133 - 37232 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107939s
	[INFO] 10.244.0.17:48133 - 37035 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126696s
	[INFO] 10.244.0.17:53295 - 65408 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326827s
	[INFO] 10.244.0.17:53295 - 77 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326573s
	[INFO] 10.244.0.17:33212 - 30323 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158967s
	[INFO] 10.244.0.17:33212 - 30184 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000216723s
	[INFO] 10.244.0.21:34949 - 17830 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154183s
	[INFO] 10.244.0.21:49462 - 35773 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000075701s
	[INFO] 10.244.0.21:57770 - 55207 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121592s
	[INFO] 10.244.0.21:59139 - 26975 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000072608s
	[INFO] 10.244.0.21:50634 - 40941 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084718s
	[INFO] 10.244.0.21:51166 - 20209 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081321s
	[INFO] 10.244.0.21:38188 - 63625 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002106926s
	[INFO] 10.244.0.21:56940 - 55435 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002170903s
	[INFO] 10.244.0.21:56319 - 15814 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001378208s
	[INFO] 10.244.0.21:45914 - 64890 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002121532s
	[INFO] 10.244.0.23:59919 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000192625s
	[INFO] 10.244.0.23:38208 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000177322s
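The NXDOMAIN pairs above are ordinary resolver search-list expansion rather than failures: with the cluster default of ndots:5, a name like registry.kube-system.svc.cluster.local is tried against each search suffix (the pod's namespace domain, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain) before the bare name answers NOERROR. Appending a trailing dot makes the name fully qualified and skips the list entirely; a sketch from the busybox pod already running in this cluster (assuming its image ships nslookup):

    kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local.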
	
	
	==> describe nodes <==
	Name:               addons-501661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-501661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-501661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-501661
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-501661"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:15:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-501661
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:20:43 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:20:43 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:20:43 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:20:43 +0000   Sun, 26 Oct 2025 14:16:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-501661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                311e3ffa-44e3-4a34-9a3d-a90448f695e8
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     cloud-spanner-emulator-86bd5cbb97-rt9p8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     hello-world-app-5d498dc89-fznqt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-2t2bm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  gcp-auth                    gcp-auth-78565c9fb4-vzzg7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hrnwk    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-5nrx2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 csi-hostpathplugin-bdsts                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 etcd-addons-501661                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m10s
	  kube-system                 kindnet-wggwr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-addons-501661                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-addons-501661        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-rxl4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-501661                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 metrics-server-85b7d694d7-ljcz5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m
	  kube-system                 nvidia-device-plugin-daemonset-j5x9f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 registry-6b586f9694-ndtxx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-creds-764b6fb674-2fxp4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 registry-proxy-26bjw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-dbl7s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-hpxf6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  local-path-storage          local-path-provisioner-648f6765c9-4fmxv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bdtjs               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
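As a consistency check on the summary: allocatable CPU is 2 cores (2000m), so 1050m of requests is 1050/2000 = 52.5%, shown as 52%; and 638Mi of memory requests against 8022300Ki (about 7834Mi) allocatable is roughly 8.1%, shown as 8%.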
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m18s (x9 over 5m18s)  kubelet          Node addons-501661 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node addons-501661 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node addons-501661 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m10s                  kubelet          Node addons-501661 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s                  kubelet          Node addons-501661 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s                  kubelet          Node addons-501661 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m6s                   node-controller  Node addons-501661 event: Registered Node addons-501661 in Controller
	  Normal   NodeReady                4m24s                  kubelet          Node addons-501661 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 13:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:15] overlayfs: idmapped layers are currently not supported
	[  +0.080342] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616] <==
	{"level":"warn","ts":"2025-10-26T14:15:32.849544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.851791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.882448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.908102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.941114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.977074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.996027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.029526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.061473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.089907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.141675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.154138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.179630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.207126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.232867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.268917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.294661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.329657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.473947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:49.942909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:49.958366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.458313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.472117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.492140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.506126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51122","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [9870f74d88fd1169c4c4d0ff6a14410d72f85aa111abcaf0941672d3c4531fdf] <==
	2025/10/26 14:17:36 GCP Auth Webhook started!
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:59 Ready to marshal response ...
	2025/10/26 14:17:59 Ready to write response ...
	2025/10/26 14:18:02 Ready to marshal response ...
	2025/10/26 14:18:02 Ready to write response ...
	2025/10/26 14:18:02 Ready to marshal response ...
	2025/10/26 14:18:02 Ready to write response ...
	2025/10/26 14:18:10 Ready to marshal response ...
	2025/10/26 14:18:10 Ready to write response ...
	2025/10/26 14:18:21 Ready to marshal response ...
	2025/10/26 14:18:21 Ready to write response ...
	2025/10/26 14:18:26 Ready to marshal response ...
	2025/10/26 14:18:26 Ready to write response ...
	2025/10/26 14:18:42 Ready to marshal response ...
	2025/10/26 14:18:42 Ready to write response ...
	2025/10/26 14:20:45 Ready to marshal response ...
	2025/10/26 14:20:45 Ready to write response ...
	
	
	==> kernel <==
	 14:20:47 up  4:03,  0 user,  load average: 0.39, 1.60, 2.61
	Linux addons-501661 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8] <==
	I1026 14:18:43.138937       1 main.go:301] handling current node
	I1026 14:18:53.138930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:18:53.138975       1 main.go:301] handling current node
	I1026 14:19:03.147533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:03.147576       1 main.go:301] handling current node
	I1026 14:19:13.145370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:13.145404       1 main.go:301] handling current node
	I1026 14:19:23.145866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:23.145901       1 main.go:301] handling current node
	I1026 14:19:33.147486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:33.147519       1 main.go:301] handling current node
	I1026 14:19:43.138992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:43.139102       1 main.go:301] handling current node
	I1026 14:19:53.145772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:19:53.145882       1 main.go:301] handling current node
	I1026 14:20:03.146332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:20:03.146368       1 main.go:301] handling current node
	I1026 14:20:13.147877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:20:13.147908       1 main.go:301] handling current node
	I1026 14:20:23.141767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:20:23.141801       1 main.go:301] handling current node
	I1026 14:20:33.139885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:20:33.139920       1 main.go:301] handling current node
	I1026 14:20:43.140948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:20:43.141055       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e] <==
	E1026 14:16:28.804017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.806723       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.811658       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.833532       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.874840       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.957067       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:29.119017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:29.439849       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	W1026 14:16:29.804329       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 14:16:29.804411       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:16:29.804503       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 14:16:29.804518       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 14:16:29.804435       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 14:16:29.805691       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 14:16:30.183951       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:17:49.113536       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58868: use of closed network connection
	E1026 14:17:49.473744       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58932: use of closed network connection
	I1026 14:18:26.246107       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 14:18:26.602113       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.84.22"}
	I1026 14:18:33.700881       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1026 14:18:35.568486       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1026 14:20:45.695426       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.94.225"}
	
	
	==> kube-controller-manager [90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5] <==
	I1026 14:15:41.456880       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 14:15:41.456948       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:15:41.456985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:41.464649       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-501661" podCIDRs=["10.244.0.0/24"]
	I1026 14:15:41.496032       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 14:15:41.496127       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 14:15:41.496205       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-501661"
	I1026 14:15:41.496398       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 14:15:41.496865       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:15:41.496247       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 14:15:41.497005       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:15:41.499396       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 14:15:41.500390       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:15:41.500533       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:15:41.500656       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:15:41.511535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1026 14:15:47.897092       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1026 14:16:11.446855       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 14:16:11.450978       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1026 14:16:11.528758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 14:16:11.528891       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 14:16:11.528946       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:16:11.629753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:16:11.651397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:16:26.541790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11] <==
	I1026 14:15:43.063411       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:15:43.314321       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:15:43.422209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:15:43.422255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:15:43.422329       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:15:43.517632       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:15:43.517696       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:15:43.549215       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:15:43.549548       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:15:43.549563       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:15:43.553350       1 config.go:200] "Starting service config controller"
	I1026 14:15:43.553377       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:15:43.553395       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:15:43.553399       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:15:43.553411       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:15:43.553415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:15:43.554114       1 config.go:309] "Starting node config controller"
	I1026 14:15:43.554133       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:15:43.554140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:15:43.653685       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:15:43.653725       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:15:43.653758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1] <==
	E1026 14:15:34.876061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:15:34.876131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:15:34.876188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:15:34.876241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:15:34.876291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:15:34.876465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:15:34.876510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:15:34.876813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:15:34.876905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:15:34.878537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:15:34.878627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:15:34.878677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:15:34.878725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:15:34.878775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:15:34.878907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:15:34.878959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:15:34.879075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:15:35.795133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:15:35.795291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:15:35.814527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:15:35.830775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:15:35.888414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:15:35.900737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:15:35.900745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1026 14:15:36.462893       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:18:50 addons-501661 kubelet[1300]: I1026 14:18:50.851784    1300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nl2zt\" (UniqueName: \"kubernetes.io/projected/4242e3f2-b404-4480-a98b-6b701154915d-kube-api-access-nl2zt\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:50 addons-501661 kubelet[1300]: I1026 14:18:50.851823    1300 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-63769dfa-94e2-4876-8297-9f7d912b0466\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ac81f7a0-b276-11f0-9ef0-22dcbc0e83e3\") on node \"addons-501661\" "
	Oct 26 14:18:50 addons-501661 kubelet[1300]: I1026 14:18:50.857379    1300 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-63769dfa-94e2-4876-8297-9f7d912b0466" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ac81f7a0-b276-11f0-9ef0-22dcbc0e83e3") on node "addons-501661"
	Oct 26 14:18:50 addons-501661 kubelet[1300]: I1026 14:18:50.952754    1300 reconciler_common.go:299] "Volume detached for volume \"pvc-63769dfa-94e2-4876-8297-9f7d912b0466\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ac81f7a0-b276-11f0-9ef0-22dcbc0e83e3\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:51 addons-501661 kubelet[1300]: I1026 14:18:51.609944    1300 scope.go:117] "RemoveContainer" containerID="f8cda5aebb069cb7f9253b54576a35029fa86c14afd771e7297245c27c9c8757"
	Oct 26 14:18:53 addons-501661 kubelet[1300]: I1026 14:18:53.499950    1300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4242e3f2-b404-4480-a98b-6b701154915d" path="/var/lib/kubelet/pods/4242e3f2-b404-4480-a98b-6b701154915d/volumes"
	Oct 26 14:19:00 addons-501661 kubelet[1300]: I1026 14:19:00.497098    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-ndtxx" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:19:08 addons-501661 kubelet[1300]: I1026 14:19:08.496658    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j5x9f" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:19:27 addons-501661 kubelet[1300]: I1026 14:19:27.502313    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-26bjw" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:18 addons-501661 kubelet[1300]: I1026 14:20:18.496400    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-ndtxx" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:21 addons-501661 kubelet[1300]: I1026 14:20:21.496790    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j5x9f" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:34 addons-501661 kubelet[1300]: I1026 14:20:34.597579    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2fxp4" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:34 addons-501661 kubelet[1300]: W1026 14:20:34.627129    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/crio-fe684bf53836acccd69d9a8e2e634ccd17eea6ff36f428ecc1aaf6d7922f3ea9 WatchSource:0}: Error finding container fe684bf53836acccd69d9a8e2e634ccd17eea6ff36f428ecc1aaf6d7922f3ea9: Status 404 returned error can't find the container with id fe684bf53836acccd69d9a8e2e634ccd17eea6ff36f428ecc1aaf6d7922f3ea9
	Oct 26 14:20:36 addons-501661 kubelet[1300]: I1026 14:20:36.963568    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2fxp4" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:36 addons-501661 kubelet[1300]: I1026 14:20:36.964092    1300 scope.go:117] "RemoveContainer" containerID="bc5f53f161c58151217e5eac448928299391f3ca9d65c0b373d801fd8d128dfd"
	Oct 26 14:20:37 addons-501661 kubelet[1300]: E1026 14:20:37.608109    1300 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7a2d2b36b46a4197a033d3075eb7a227370dec722a092aac306f5d41b2a03ad1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7a2d2b36b46a4197a033d3075eb7a227370dec722a092aac306f5d41b2a03ad1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 26 14:20:37 addons-501661 kubelet[1300]: I1026 14:20:37.659087    1300 scope.go:117] "RemoveContainer" containerID="bc5f53f161c58151217e5eac448928299391f3ca9d65c0b373d801fd8d128dfd"
	Oct 26 14:20:37 addons-501661 kubelet[1300]: I1026 14:20:37.968862    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2fxp4" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:37 addons-501661 kubelet[1300]: I1026 14:20:37.968918    1300 scope.go:117] "RemoveContainer" containerID="d5c30369b6ee08619e3976b0ffb498a0879daf2e461ea671cb8d22f339de6244"
	Oct 26 14:20:37 addons-501661 kubelet[1300]: E1026 14:20:37.969089    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2fxp4_kube-system(811c0810-16ef-4371-bf68-45470eb5ca98)\"" pod="kube-system/registry-creds-764b6fb674-2fxp4" podUID="811c0810-16ef-4371-bf68-45470eb5ca98"
	Oct 26 14:20:38 addons-501661 kubelet[1300]: I1026 14:20:38.972798    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2fxp4" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:38 addons-501661 kubelet[1300]: I1026 14:20:38.972878    1300 scope.go:117] "RemoveContainer" containerID="d5c30369b6ee08619e3976b0ffb498a0879daf2e461ea671cb8d22f339de6244"
	Oct 26 14:20:38 addons-501661 kubelet[1300]: E1026 14:20:38.973100    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2fxp4_kube-system(811c0810-16ef-4371-bf68-45470eb5ca98)\"" pod="kube-system/registry-creds-764b6fb674-2fxp4" podUID="811c0810-16ef-4371-bf68-45470eb5ca98"
	Oct 26 14:20:45 addons-501661 kubelet[1300]: I1026 14:20:45.665923    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6ed7f633-1859-4cfe-80da-0633ea377425-gcp-creds\") pod \"hello-world-app-5d498dc89-fznqt\" (UID: \"6ed7f633-1859-4cfe-80da-0633ea377425\") " pod="default/hello-world-app-5d498dc89-fznqt"
	Oct 26 14:20:45 addons-501661 kubelet[1300]: I1026 14:20:45.666457    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6pn\" (UniqueName: \"kubernetes.io/projected/6ed7f633-1859-4cfe-80da-0633ea377425-kube-api-access-ct6pn\") pod \"hello-world-app-5d498dc89-fznqt\" (UID: \"6ed7f633-1859-4cfe-80da-0633ea377425\") " pod="default/hello-world-app-5d498dc89-fznqt"
	
	
	==> storage-provisioner [ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df] <==
	W1026 14:20:23.599101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:25.603900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:25.608717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:27.612618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:27.619274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:29.622040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:29.629749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:31.632538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:31.637014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:33.640445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:33.644921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:35.648208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:35.658137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:37.660760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:37.668044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:39.671176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:39.675819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:41.678734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:41.683680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:43.686441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:43.694282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:45.718199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:45.725539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:47.728830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:20:47.741225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
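The storage-provisioner warnings above are emitted because it still watches v1 Endpoints, which Kubernetes deprecated in v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement the warning asks for (illustrative only, not code from this run; assumes the default kubeconfig location, and "kube-system" is just an example namespace):

	// Sketch: list discovery.k8s.io/v1 EndpointSlices instead of the deprecated
	// v1 Endpoints flagged by storage-provisioner. All names here are illustrative.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices partition a service's endpoints into groups; iterate
		// the items where old code read a single v1 Endpoints object.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

The switch is a drop-in change for most consumers: an EndpointSlice carries the same address data as v1 Endpoints, split across per-slice groups, so code iterates slices.Items where it previously read one Endpoints object.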
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-501661 -n addons-501661
helpers_test.go:269: (dbg) Run:  kubectl --context addons-501661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf: exit status 1 (102.038959ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pptg4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qmxvf" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (275.551706ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 14:20:49.022124  725962 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:20:49.022943  725962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:20:49.022977  725962 out.go:374] Setting ErrFile to fd 2...
	I1026 14:20:49.022998  725962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:20:49.023311  725962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:20:49.023655  725962 mustload.go:65] Loading cluster: addons-501661
	I1026 14:20:49.024074  725962 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:20:49.024107  725962 addons.go:606] checking whether the cluster is paused
	I1026 14:20:49.024232  725962 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:20:49.024276  725962 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:20:49.024898  725962 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:20:49.043204  725962 ssh_runner.go:195] Run: systemctl --version
	I1026 14:20:49.043255  725962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:20:49.062306  725962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:20:49.167315  725962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:20:49.167409  725962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:20:49.196474  725962 cri.go:89] found id: "d5c30369b6ee08619e3976b0ffb498a0879daf2e461ea671cb8d22f339de6244"
	I1026 14:20:49.196519  725962 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:20:49.196525  725962 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:20:49.196529  725962 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:20:49.196533  725962 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:20:49.196537  725962 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:20:49.196540  725962 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:20:49.196543  725962 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:20:49.196563  725962 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:20:49.196576  725962 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:20:49.196581  725962 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:20:49.196584  725962 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:20:49.196587  725962 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:20:49.196590  725962 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:20:49.196594  725962 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:20:49.196602  725962 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:20:49.196612  725962 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:20:49.196617  725962 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:20:49.196620  725962 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:20:49.196623  725962 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:20:49.196638  725962 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:20:49.196646  725962 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:20:49.196649  725962 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:20:49.196652  725962 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:20:49.196655  725962 cri.go:89] found id: ""
	I1026 14:20:49.196731  725962 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:20:49.213529  725962 out.go:203] 
	W1026 14:20:49.217310  725962 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:20:49.217348  725962 out.go:285] * 
	* 
	W1026 14:20:49.223716  725962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:20:49.227734  725962 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
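Each of these exit-status-11 disables fails the same way: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json`, and that second step dies with `open /run/runc: no such file or directory`. A rough reproduction of the check, under the assumption that this crio node's OCI runtime keeps its state somewhere other than runc's default root (the helper below and the /run/crun fallback are illustrative, not minikube's actual code):

	// Sketch of the failing paused check. listContainers and the crun fallback
	// are hypothetical; only the runc invocation mirrors what the log shows.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers runs an OCI runtime's `list` against a given state root.
	// minikube's real invocation is `sudo runc list -f json` with the default root.
	func listContainers(runtime, root string) ([]byte, error) {
		return exec.Command("sudo", runtime, "--root", root, "list").CombinedOutput()
	}

	func main() {
		if out, err := listContainers("runc", "/run/runc"); err != nil {
			// On this node: "open /run/runc: no such file or directory".
			fmt.Printf("runc check failed: %v\n%s", err, out)
			// Assumption: if crio is configured with crun, its state root is /run/crun.
			if out, err := listContainers("crun", "/run/crun"); err == nil {
				fmt.Printf("crun sees containers:\n%s", out)
			}
		}
	}

The same MK_ADDON_DISABLE_PAUSED failure recurs verbatim for the ingress, inspektor-gadget, and metrics-server disables below, so the addon teardown errors in this run appear to trace back to one runtime-root mismatch rather than to the addons themselves.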
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable ingress --alsologtostderr -v=1: exit status 11 (256.898862ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 14:20:49.285529  726006 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:20:49.286274  726006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:20:49.286287  726006 out.go:374] Setting ErrFile to fd 2...
	I1026 14:20:49.286291  726006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:20:49.286540  726006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:20:49.286835  726006 mustload.go:65] Loading cluster: addons-501661
	I1026 14:20:49.287199  726006 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:20:49.287217  726006 addons.go:606] checking whether the cluster is paused
	I1026 14:20:49.287315  726006 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:20:49.287329  726006 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:20:49.287760  726006 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:20:49.305544  726006 ssh_runner.go:195] Run: systemctl --version
	I1026 14:20:49.305612  726006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:20:49.323330  726006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:20:49.427241  726006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:20:49.427341  726006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:20:49.459562  726006 cri.go:89] found id: "d5c30369b6ee08619e3976b0ffb498a0879daf2e461ea671cb8d22f339de6244"
	I1026 14:20:49.459581  726006 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:20:49.459585  726006 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:20:49.459589  726006 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:20:49.459592  726006 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:20:49.459596  726006 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:20:49.459599  726006 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:20:49.459602  726006 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:20:49.459605  726006 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:20:49.459611  726006 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:20:49.459614  726006 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:20:49.459617  726006 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:20:49.459620  726006 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:20:49.459623  726006 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:20:49.459626  726006 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:20:49.459634  726006 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:20:49.459638  726006 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:20:49.459642  726006 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:20:49.459646  726006 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:20:49.459649  726006 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:20:49.459654  726006 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:20:49.459657  726006 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:20:49.459660  726006 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:20:49.459663  726006 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:20:49.459666  726006 cri.go:89] found id: ""
	I1026 14:20:49.459714  726006 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:20:49.474420  726006 out.go:203] 
	W1026 14:20:49.477317  726006 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:20:49.477345  726006 out.go:285] * 
	* 
	W1026 14:20:49.483693  726006 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:20:49.486464  726006 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.62s)
TestAddons/parallel/InspektorGadget (6.34s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-2t2bm" [bdc8e3e2-c51d-46c0-8a75-b10ab3dc556c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00429102s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (329.897178ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1026 14:18:25.623432  723869 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:25.624260  723869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:25.624273  723869 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:25.624278  723869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:25.624593  723869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:25.624932  723869 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:25.625325  723869 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:25.625342  723869 addons.go:606] checking whether the cluster is paused
	I1026 14:18:25.625450  723869 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:25.625460  723869 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:25.625906  723869 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:25.645444  723869 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:25.645507  723869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:25.670352  723869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:25.789352  723869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:25.789450  723869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:25.826211  723869 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:25.826237  723869 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:25.826242  723869 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:25.826246  723869 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:25.826249  723869 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:25.826253  723869 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:25.826256  723869 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:25.826259  723869 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:25.826262  723869 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:25.826270  723869 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:25.826273  723869 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:25.826276  723869 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:25.826280  723869 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:25.826283  723869 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:25.826287  723869 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:25.826297  723869 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:25.826308  723869 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:25.826312  723869 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:25.826321  723869 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:25.826324  723869 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:25.826329  723869 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:25.826334  723869 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:25.826346  723869 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:25.826349  723869 cri.go:89] found id: ""
	I1026 14:18:25.826402  723869 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:25.849042  723869 out.go:203] 
	W1026 14:18:25.852670  723869 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:25.852746  723869 out.go:285] * 
	* 
	W1026 14:18:25.859255  723869 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:25.863190  723869 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.34s)
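
All of the addon enable/disable failures in this report share the root cause visible above: before touching an addon, minikube checks whether the cluster is paused by listing runc containers, and on this crio node runc's default state directory /run/runc does not exist, so the check itself exits non-zero and the command aborts with exit status 11. The Go sketch below reproduces just that check; the command string is copied verbatim from the log, while the surrounding program is illustrative and assumes it is run inside the node (e.g. via `minikube ssh`).

	package main

	// Minimal reproduction of the paused-container check that fails above:
	// minikube shells out to `sudo runc list -f json` and treats a non-zero
	// exit as fatal (MK_ADDON_DISABLE_PAUSED). On this node /run/runc is
	// missing, so runc exits 1 before listing anything.
	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("check paused failed:", err)
		}
	}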

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.619857ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003257355s
addons_test.go:463: (dbg) Run:  kubectl --context addons-501661 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (290.764934ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:19.297516  723722 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:19.298292  723722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:19.298330  723722 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:19.298354  723722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:19.298635  723722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:19.298988  723722 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:19.299392  723722 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:19.299436  723722 addons.go:606] checking whether the cluster is paused
	I1026 14:18:19.299561  723722 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:19.299593  723722 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:19.300127  723722 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:19.319979  723722 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:19.320052  723722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:19.358633  723722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:19.467638  723722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:19.467765  723722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:19.502059  723722 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:19.502080  723722 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:19.502085  723722 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:19.502089  723722 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:19.502092  723722 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:19.502095  723722 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:19.502098  723722 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:19.502101  723722 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:19.502104  723722 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:19.502113  723722 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:19.502117  723722 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:19.502120  723722 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:19.502123  723722 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:19.502126  723722 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:19.502130  723722 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:19.502136  723722 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:19.502143  723722 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:19.502159  723722 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:19.502166  723722 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:19.502170  723722 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:19.502175  723722 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:19.502178  723722 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:19.502181  723722 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:19.502185  723722 cri.go:89] found id: ""
	I1026 14:18:19.502237  723722 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:19.517506  723722 out.go:203] 
	W1026 14:18:19.520571  723722 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:19.520597  723722 out.go:285] * 
	W1026 14:18:19.527019  723722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:19.530111  723722 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/CSI (41.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1026 14:18:11.071018  715440 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 14:18:11.076273  715440 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 14:18:11.076299  715440 kapi.go:107] duration metric: took 5.294756ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.304775ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-501661 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc -o jsonpath={.status.phase} -n default
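
The eleven identical helpers_test.go:402 lines above are a poll: the helper re-runs the same jsonpath query until the claim's phase is Bound or the 6m0s budget from addons_test.go:557 runs out. A rough equivalent in Go is sketched below; waitPVCBound and its 2-second interval are illustrative stand-ins, not minikube's actual helper.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitPVCBound re-runs `kubectl get pvc <name> -o jsonpath={.status.phase}`
	// until the claim reports Bound, mirroring the repeated helper calls above.
	func waitPVCBound(kubectx, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		fmt.Println(waitPVCBound("addons-501661", "hpvc", "default", 6*time.Minute))
	}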
addons_test.go:562: (dbg) Run:  kubectl --context addons-501661 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [40791363-f22b-425a-b709-3ba1c040ae24] Pending
helpers_test.go:352: "task-pv-pod" [40791363-f22b-425a-b709-3ba1c040ae24] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [40791363-f22b-425a-b709-3ba1c040ae24] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003412482s
addons_test.go:572: (dbg) Run:  kubectl --context addons-501661 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-501661 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-501661 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-501661 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-501661 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-501661 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-501661 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4242e3f2-b404-4480-a98b-6b701154915d] Pending
helpers_test.go:352: "task-pv-pod-restore" [4242e3f2-b404-4480-a98b-6b701154915d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4242e3f2-b404-4480-a98b-6b701154915d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002909607s
addons_test.go:614: (dbg) Run:  kubectl --context addons-501661 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-501661 delete pod task-pv-pod-restore: (1.250927207s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-501661 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-501661 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (295.031269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:52.118363  724686 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:52.119200  724686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.119264  724686 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:52.119270  724686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.119552  724686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:52.119931  724686 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:52.120329  724686 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.120348  724686 addons.go:606] checking whether the cluster is paused
	I1026 14:18:52.120456  724686 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.120471  724686 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:52.121149  724686 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:52.139881  724686 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:52.139941  724686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:52.166764  724686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:52.281481  724686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:52.281596  724686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:52.324419  724686 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:52.324447  724686 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:52.324454  724686 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:52.324458  724686 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:52.324462  724686 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:52.324466  724686 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:52.324469  724686 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:52.324472  724686 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:52.324475  724686 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:52.324489  724686 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:52.324494  724686 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:52.324498  724686 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:52.324501  724686 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:52.324505  724686 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:52.324514  724686 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:52.324523  724686 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:52.324531  724686 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:52.324536  724686 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:52.324540  724686 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:52.324543  724686 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:52.324548  724686 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:52.324551  724686 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:52.324555  724686 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:52.324559  724686 cri.go:89] found id: ""
	I1026 14:18:52.324614  724686 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:52.340983  724686 out.go:203] 
	W1026 14:18:52.343912  724686 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:52.343949  724686 out.go:285] * 
	W1026 14:18:52.350488  724686 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:52.353307  724686 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (278.402319ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:52.412627  724736 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:52.414648  724736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.414666  724736 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:52.414673  724736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:52.414953  724736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:52.415276  724736 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:52.415678  724736 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.415696  724736 addons.go:606] checking whether the cluster is paused
	I1026 14:18:52.415803  724736 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:52.415820  724736 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:52.416287  724736 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:52.436303  724736 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:52.436363  724736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:52.457778  724736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:52.564340  724736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:52.564445  724736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:52.602773  724736 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:52.602794  724736 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:52.602799  724736 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:52.602803  724736 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:52.602807  724736 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:52.602810  724736 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:52.602813  724736 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:52.602850  724736 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:52.602861  724736 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:52.602868  724736 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:52.602871  724736 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:52.602875  724736 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:52.602879  724736 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:52.602882  724736 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:52.602886  724736 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:52.602891  724736 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:52.602896  724736 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:52.602900  724736 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:52.602918  724736 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:52.602924  724736 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:52.602931  724736 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:52.602942  724736 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:52.602945  724736 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:52.602948  724736 cri.go:89] found id: ""
	I1026 14:18:52.603019  724736 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:52.620142  724736 out.go:203] 
	W1026 14:18:52.623048  724736 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:52.623086  724736 out.go:285] * 
	W1026 14:18:52.629480  724736 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:52.632765  724736 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.57s)

TestAddons/parallel/Headlamp (3.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-501661 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-501661 --alsologtostderr -v=1: exit status 11 (363.037033ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:10.318693  723043 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:10.321616  723043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:10.321641  723043 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:10.321647  723043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:10.321955  723043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:10.322282  723043 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:10.322662  723043 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:10.322679  723043 addons.go:606] checking whether the cluster is paused
	I1026 14:18:10.322782  723043 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:10.322799  723043 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:10.323392  723043 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:10.349372  723043 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:10.349431  723043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:10.401630  723043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:10.523528  723043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:10.523614  723043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:10.571615  723043 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:10.571639  723043 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:10.571644  723043 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:10.571648  723043 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:10.571651  723043 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:10.571655  723043 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:10.571658  723043 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:10.571661  723043 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:10.571664  723043 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:10.571670  723043 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:10.571674  723043 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:10.571677  723043 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:10.571681  723043 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:10.571684  723043 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:10.571687  723043 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:10.571696  723043 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:10.571699  723043 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:10.571704  723043 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:10.571707  723043 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:10.571710  723043 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:10.571715  723043 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:10.571722  723043 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:10.571725  723043 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:10.571728  723043 cri.go:89] found id: ""
	I1026 14:18:10.571781  723043 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:10.593169  723043 out.go:203] 
	W1026 14:18:10.595933  723043 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:10.595961  723043 out.go:285] * 
	W1026 14:18:10.602397  723043 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:10.605062  723043 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-501661 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-501661
helpers_test.go:243: (dbg) docker inspect addons-501661:

-- stdout --
	[
	    {
	        "Id": "33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0",
	        "Created": "2025-10-26T14:15:07.120202821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 716600,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:15:07.183599693Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/hosts",
	        "LogPath": "/var/lib/docker/containers/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0-json.log",
	        "Name": "/addons-501661",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-501661:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-501661",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0",
	                "LowerDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c69045b4222743247451a3343956b81491f7f3fd188a1936a10666d37e1a138/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-501661",
	                "Source": "/var/lib/docker/volumes/addons-501661/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-501661",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-501661",
	                "name.minikube.sigs.k8s.io": "addons-501661",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bb27cea89a452f483398fba8e83bcc93c8ff35f8316102106d0b8b312d75055",
	            "SandboxKey": "/var/run/docker/netns/5bb27cea89a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-501661": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:08:81:b2:90:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4884fd2f2ecc1d7b1c1eaa3c5ef8ef4f0bdc55395d7ff2fd12eca5ac47f857a9",
	                    "EndpointID": "9b5e65b9add35e1829b44a1c4ab90c4d8e7c0ffa03899b60e23ef68ab250fb09",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-501661",
	                        "33a58f25144b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
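
The Ports map in the inspect output above is what the cli_runner lines in each stderr block query: minikube resolves the host port bound to the node's SSH port (22/tcp -> 127.0.0.1:33537 on this run) by evaluating a Go template against `docker container inspect`. A small sketch follows, with the format string copied from the log and the wrapper function purely illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort evaluates the same inspect template seen in the logs to
	// find which localhost port Docker mapped to the container's port 22.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := hostSSHPort("addons-501661")
		fmt.Println(port, err) // prints 33537 on this run, per the Ports map above
	}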
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-501661 -n addons-501661
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-501661 logs -n 25: (1.804901208s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-638833 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-638833   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-638833                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-638833   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ -o=json --download-only -p download-only-758046 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-758046   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-758046                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-758046   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-638833                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-638833   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-758046                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-758046   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p download-docker-958542 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-958542 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p download-docker-958542                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-958542 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p binary-mirror-069171 --alsologtostderr --binary-mirror http://127.0.0.1:38609 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-069171   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p binary-mirror-069171                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-069171   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ addons  │ disable dashboard -p addons-501661                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ addons  │ enable dashboard -p addons-501661                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ start   │ -p addons-501661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:17 UTC │
	│ addons  │ addons-501661 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-501661 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-501661 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ addons-501661 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-501661 ssh cat /opt/local-path-provisioner/pvc-26a36dca-438b-4339-abca-53d25f00dbaf_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ enable headlamp -p addons-501661 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-501661 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-501661          │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
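	# Any audit row above can be replayed by hand with the same binary and profile;
	# a sketch using the headlamp row from 14:18 UTC (paths assumed unchanged):
	out/minikube-linux-arm64 -p addons-501661 addons enable headlamp --alsologtostderr -v=1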
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:42.055233  716202 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:42.055382  716202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:42.055394  716202 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:42.055399  716202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:42.055724  716202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:14:42.056289  716202 out.go:368] Setting JSON to false
	I1026 14:14:42.057257  716202 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14234,"bootTime":1761473848,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:14:42.057460  716202 start.go:141] virtualization:  
	I1026 14:14:42.061131  716202 out.go:179] * [addons-501661] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 14:14:42.064173  716202 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:14:42.064245  716202 notify.go:220] Checking for updates...
	I1026 14:14:42.070226  716202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:42.073357  716202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:14:42.076929  716202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:14:42.079942  716202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 14:14:42.083137  716202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:14:42.086543  716202 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:42.120541  716202 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:14:42.120728  716202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:42.190403  716202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 14:14:42.176559719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:42.190543  716202 docker.go:318] overlay module found
	I1026 14:14:42.194003  716202 out.go:179] * Using the docker driver based on user configuration
	I1026 14:14:42.197229  716202 start.go:305] selected driver: docker
	I1026 14:14:42.197265  716202 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:42.197282  716202 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:14:42.198204  716202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:42.264287  716202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-26 14:14:42.25345366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:42.264458  716202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:42.264833  716202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:14:42.267814  716202 out.go:179] * Using Docker driver with root privileges
	I1026 14:14:42.270847  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:14:42.270949  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:42.270960  716202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:42.271065  716202 start.go:349] cluster config:
	{Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:42.274434  716202 out.go:179] * Starting "addons-501661" primary control-plane node in "addons-501661" cluster
	I1026 14:14:42.277517  716202 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:42.280575  716202 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:42.283653  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:42.283778  716202 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:42.283796  716202 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:42.283699  716202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:42.283919  716202 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 14:14:42.283930  716202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:42.284363  716202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json ...
	I1026 14:14:42.284457  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json: {Name:mk6ffa79d382f43a49c9863fe564896f0de6493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
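	# The profile just saved is plain JSON; a hedged sketch for spot-checking one
	# field from it (assumes jq is installed; keys mirror the struct logged above):
	jq '.KubernetesConfig.KubernetesVersion' /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json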
	I1026 14:14:42.299975  716202 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:42.300149  716202 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:42.300173  716202 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:42.300182  716202 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:42.300190  716202 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:42.300201  716202 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 14:15:00.269195  716202 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 14:15:00.269269  716202 cache.go:232] Successfully downloaded all kic artifacts
	I1026 14:15:00.269306  716202 start.go:360] acquireMachinesLock for addons-501661: {Name:mk5c0728e792ff8d50e668fa90808b2014a3f87e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:15:00.269444  716202 start.go:364] duration metric: took 117.638µs to acquireMachinesLock for "addons-501661"
	I1026 14:15:00.269473  716202 start.go:93] Provisioning new machine with config: &{Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:00.269565  716202 start.go:125] createHost starting for "" (driver="docker")
	I1026 14:15:00.277930  716202 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 14:15:00.278222  716202 start.go:159] libmachine.API.Create for "addons-501661" (driver="docker")
	I1026 14:15:00.278271  716202 client.go:168] LocalClient.Create starting
	I1026 14:15:00.278444  716202 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 14:15:00.692305  716202 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 14:15:01.009911  716202 cli_runner.go:164] Run: docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 14:15:01.028660  716202 cli_runner.go:211] docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 14:15:01.028780  716202 network_create.go:284] running [docker network inspect addons-501661] to gather additional debugging logs...
	I1026 14:15:01.028803  716202 cli_runner.go:164] Run: docker network inspect addons-501661
	W1026 14:15:01.047016  716202 cli_runner.go:211] docker network inspect addons-501661 returned with exit code 1
	I1026 14:15:01.047056  716202 network_create.go:287] error running [docker network inspect addons-501661]: docker network inspect addons-501661: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-501661 not found
	I1026 14:15:01.047114  716202 network_create.go:289] output of [docker network inspect addons-501661]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-501661 not found
	
	** /stderr **
	I1026 14:15:01.047270  716202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:15:01.066000  716202 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6c5a0}
	I1026 14:15:01.066040  716202 network_create.go:124] attempt to create docker network addons-501661 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 14:15:01.066097  716202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-501661 addons-501661
	I1026 14:15:01.130861  716202 network_create.go:108] docker network addons-501661 192.168.49.0/24 created
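	# A quick sketch to confirm the subnet and gateway of the network just created
	# (standard docker CLI format query):
	docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' addons-501661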
	I1026 14:15:01.130895  716202 kic.go:121] calculated static IP "192.168.49.2" for the "addons-501661" container
	I1026 14:15:01.130987  716202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 14:15:01.153411  716202 cli_runner.go:164] Run: docker volume create addons-501661 --label name.minikube.sigs.k8s.io=addons-501661 --label created_by.minikube.sigs.k8s.io=true
	I1026 14:15:01.175241  716202 oci.go:103] Successfully created a docker volume addons-501661
	I1026 14:15:01.175403  716202 cli_runner.go:164] Run: docker run --rm --name addons-501661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --entrypoint /usr/bin/test -v addons-501661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 14:15:02.610772  716202 cli_runner.go:217] Completed: docker run --rm --name addons-501661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --entrypoint /usr/bin/test -v addons-501661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.43532288s)
	I1026 14:15:02.610832  716202 oci.go:107] Successfully prepared a docker volume addons-501661
	I1026 14:15:02.610876  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:02.610904  716202 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 14:15:02.610981  716202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-501661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 14:15:07.049506  716202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-501661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.438482193s)
	I1026 14:15:07.049539  716202 kic.go:203] duration metric: took 4.438632283s to extract preloaded images to volume ...
	W1026 14:15:07.049700  716202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 14:15:07.049835  716202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 14:15:07.104302  716202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-501661 --name addons-501661 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-501661 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-501661 --network addons-501661 --ip 192.168.49.2 --volume addons-501661:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 14:15:07.416804  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Running}}
	I1026 14:15:07.436894  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:07.457468  716202 cli_runner.go:164] Run: docker exec addons-501661 stat /var/lib/dpkg/alternatives/iptables
	I1026 14:15:07.509525  716202 oci.go:144] the created container "addons-501661" has a running status.
	I1026 14:15:07.509555  716202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa...
	I1026 14:15:07.961680  716202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 14:15:07.982119  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:08.000804  716202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 14:15:08.000828  716202 kic_runner.go:114] Args: [docker exec --privileged addons-501661 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 14:15:08.046819  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:08.065983  716202 machine.go:93] provisionDockerMachine start ...
	I1026 14:15:08.066115  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:08.085522  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:08.085917  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:08.085935  716202 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:15:08.086638  716202 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 14:15:11.240314  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-501661
	
	I1026 14:15:11.240337  716202 ubuntu.go:182] provisioning hostname "addons-501661"
	I1026 14:15:11.240399  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.257715  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.258045  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.258061  716202 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-501661 && echo "addons-501661" | sudo tee /etc/hostname
	I1026 14:15:11.413789  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-501661
	
	I1026 14:15:11.413894  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.432403  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.432739  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.432760  716202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-501661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-501661/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-501661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:15:11.580886  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:15:11.580914  716202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 14:15:11.580933  716202 ubuntu.go:190] setting up certificates
	I1026 14:15:11.580960  716202 provision.go:84] configureAuth start
	I1026 14:15:11.581030  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:11.599217  716202 provision.go:143] copyHostCerts
	I1026 14:15:11.599331  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 14:15:11.599551  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 14:15:11.599644  716202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 14:15:11.599721  716202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.addons-501661 san=[127.0.0.1 192.168.49.2 addons-501661 localhost minikube]
	I1026 14:15:11.789428  716202 provision.go:177] copyRemoteCerts
	I1026 14:15:11.789502  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:15:11.789544  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.807220  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:11.912316  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:15:11.929810  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:15:11.948586  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 14:15:11.965805  716202 provision.go:87] duration metric: took 384.815466ms to configureAuth
	I1026 14:15:11.965887  716202 ubuntu.go:206] setting minikube options for container-runtime
	I1026 14:15:11.966108  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:11.966252  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:11.982978  716202 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:11.983310  716202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I1026 14:15:11.983331  716202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:15:12.241509  716202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
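	# Sketch of a follow-up check, run inside the node, that the drop-in above
	# landed and CRI-O restarted cleanly (same binary and profile as elsewhere in this log):
	out/minikube-linux-arm64 -p addons-501661 ssh -- "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"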
	
	I1026 14:15:12.241535  716202 machine.go:96] duration metric: took 4.175521394s to provisionDockerMachine
	I1026 14:15:12.241545  716202 client.go:171] duration metric: took 11.963267545s to LocalClient.Create
	I1026 14:15:12.241556  716202 start.go:167] duration metric: took 11.963336928s to libmachine.API.Create "addons-501661"
	I1026 14:15:12.241564  716202 start.go:293] postStartSetup for "addons-501661" (driver="docker")
	I1026 14:15:12.241579  716202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:15:12.241642  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:15:12.241688  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.260095  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.365149  716202 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:15:12.368687  716202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 14:15:12.368744  716202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 14:15:12.368756  716202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 14:15:12.368826  716202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 14:15:12.368854  716202 start.go:296] duration metric: took 127.285041ms for postStartSetup
	I1026 14:15:12.369182  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:12.386160  716202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/config.json ...
	I1026 14:15:12.386460  716202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:15:12.386509  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.403921  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.505954  716202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 14:15:12.510867  716202 start.go:128] duration metric: took 12.241284453s to createHost
	I1026 14:15:12.510890  716202 start.go:83] releasing machines lock for "addons-501661", held for 12.2414373s
	I1026 14:15:12.510971  716202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-501661
	I1026 14:15:12.527802  716202 ssh_runner.go:195] Run: cat /version.json
	I1026 14:15:12.527855  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.527887  716202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:15:12.527947  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:12.548368  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.571908  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:12.745974  716202 ssh_runner.go:195] Run: systemctl --version
	I1026 14:15:12.752629  716202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:15:12.789025  716202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:15:12.793547  716202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:15:12.793626  716202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:15:12.821725  716202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 14:15:12.821748  716202 start.go:495] detecting cgroup driver to use...
	I1026 14:15:12.821783  716202 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 14:15:12.821835  716202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:15:12.839056  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:15:12.852617  716202 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:15:12.852684  716202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:15:12.870809  716202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:15:12.890211  716202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:15:13.006585  716202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:15:13.126625  716202 docker.go:234] disabling docker service ...
	I1026 14:15:13.126729  716202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:15:13.148968  716202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:15:13.162507  716202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:15:13.280578  716202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:15:13.401911  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:15:13.415305  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
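	# With /etc/crictl.yaml in place crictl defaults to the CRI-O socket; the
	# endpoint can also be passed explicitly (equivalent sketch):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info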
	I1026 14:15:13.430707  716202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:15:13.430826  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.440743  716202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 14:15:13.440860  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.450970  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.460704  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.470294  716202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:15:13.478999  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.488399  716202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.502311  716202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:13.511952  716202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:15:13.519938  716202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 14:15:13.527330  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:13.636559  716202 ssh_runner.go:195] Run: sudo systemctl restart crio
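	# The net effect of the sed edits above can be spot-checked after the restart (sketch):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf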
	I1026 14:15:13.761926  716202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:15:13.762012  716202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:15:13.765985  716202 start.go:563] Will wait 60s for crictl version
	I1026 14:15:13.766050  716202 ssh_runner.go:195] Run: which crictl
	I1026 14:15:13.769918  716202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 14:15:13.798334  716202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 14:15:13.798434  716202 ssh_runner.go:195] Run: crio --version
	I1026 14:15:13.827855  716202 ssh_runner.go:195] Run: crio --version
	I1026 14:15:13.859562  716202 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 14:15:13.862275  716202 cli_runner.go:164] Run: docker network inspect addons-501661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:15:13.878673  716202 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 14:15:13.882751  716202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:15:13.893218  716202 kubeadm.go:883] updating cluster {Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:15:13.893352  716202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:13.893408  716202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:15:13.927258  716202 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:15:13.927285  716202 crio.go:433] Images already preloaded, skipping extraction
	I1026 14:15:13.927342  716202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:15:13.953521  716202 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:15:13.953548  716202 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:15:13.953556  716202 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 14:15:13.953707  716202 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-501661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
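The unit text above is a systemd drop-in: the bare ExecStart= clears the packaged kubelet command line before the minikube-specific one is set, and the rendered file lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp step below. A sketch that re-renders the same drop-in from a flag map; the flag values are copied from the log, while the map-based rendering itself is illustrative:

package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Kubelet flags as they appear in the drop-in above.
	flags := map[string]string{
		"bootstrap-kubeconfig":     "/etc/kubernetes/bootstrap-kubelet.conf",
		"cgroups-per-qos":          "false",
		"config":                   "/var/lib/kubelet/config.yaml",
		"enforce-node-allocatable": "",
		"hostname-override":        "addons-501661",
		"kubeconfig":               "/etc/kubernetes/kubelet.conf",
		"node-ip":                  "192.168.49.2",
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic flag order
	args := make([]string, 0, len(keys))
	for _, k := range keys {
		args = append(args, "--"+k+"="+flags[k])
	}
	// The empty ExecStart= resets the packaged unit before redefining it.
	fmt.Printf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s %s\n\n[Install]\n",
		"/var/lib/minikube/binaries/v1.34.1/kubelet", strings.Join(args, " "))
}
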
	I1026 14:15:13.953820  716202 ssh_runner.go:195] Run: crio config
	I1026 14:15:14.009658  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:15:14.009688  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:15:14.009737  716202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:15:14.009774  716202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-501661 NodeName:addons-501661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:15:14.009932  716202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-501661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
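The generated config is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note that imageGCHighThresholdPercent and the zeroed evictionHard thresholds effectively disable disk-pressure eviction, as the inline comment says. A quick sanity check that splits the staged file on document separators and prints each kind; the path is the kubeadm.yaml.new written below:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path from the log's "scp memory --> /var/tmp/minikube/kubeadm.yaml.new".
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	// Expected: InitConfiguration, ClusterConfiguration,
	// KubeletConfiguration, KubeProxyConfiguration.
}
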
	I1026 14:15:14.010021  716202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:15:14.018937  716202 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:15:14.019015  716202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:15:14.029910  716202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 14:15:14.044376  716202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:15:14.058983  716202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1026 14:15:14.072864  716202 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 14:15:14.076802  716202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:15:14.087360  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:14.197726  716202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:14.215714  716202 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661 for IP: 192.168.49.2
	I1026 14:15:14.215775  716202 certs.go:195] generating shared ca certs ...
	I1026 14:15:14.215816  716202 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.215996  716202 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 14:15:14.779290  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt ...
	I1026 14:15:14.779327  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt: {Name:mk2c0e7a4e6d1fe9d266ab325b3b3bd561912232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.779525  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key ...
	I1026 14:15:14.779538  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key: {Name:mk02d44b794c6056a853f955e32c6a8c5904be50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:14.780533  716202 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 14:15:15.952853  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt ...
	I1026 14:15:15.952886  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt: {Name:mkbbf0e37788070513f9effbcb8e28c9fecaefd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:15.953707  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key ...
	I1026 14:15:15.953728  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key: {Name:mk57df68f99156273f52c3d63d326f096df7d363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:15.954333  716202 certs.go:257] generating profile certs ...
	I1026 14:15:15.954398  716202 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key
	I1026 14:15:15.954416  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt with IP's: []
	I1026 14:15:16.494349  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt ...
	I1026 14:15:16.494387  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: {Name:mk05e07203e8ab24bc5dd6dfb5d764b97f63a6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:16.494559  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key ...
	I1026 14:15:16.494572  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.key: {Name:mkf37d9c45f7269fb2b9d04391fe254c04b2102f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:16.494652  716202 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce
	I1026 14:15:16.494675  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 14:15:18.064255  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce ...
	I1026 14:15:18.064289  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce: {Name:mkf79e3703563e5002acb2e92656927338d6c675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.065102  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce ...
	I1026 14:15:18.065124  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce: {Name:mk5cba8786b83520a32546a1da36527afa06864d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.065224  716202 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt.a029b9ce -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt
	I1026 14:15:18.065310  716202 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key.a029b9ce -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key
	I1026 14:15:18.065366  716202 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key
	I1026 14:15:18.065387  716202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt with IP's: []
	I1026 14:15:18.584573  716202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt ...
	I1026 14:15:18.584606  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt: {Name:mk995d0661ebf3dd1e98494e769b493197ac7fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:18.584802  716202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key ...
	I1026 14:15:18.584823  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key: {Name:mk2e4c388b4bf0fa4afee8eb80584493d5022993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
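Each crypto.go step above generates an RSA key, builds an x509 template, and signs it: self-signed for the two CAs, then signed by minikubeCA for the per-profile client, apiserver, and proxy-client certs. A bare-bones sketch of that CA-then-leaf flow with crypto/x509; key sizes, lifetimes, and the system:masters organization are illustrative rather than minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA, standing in for the "minikubeCA" generation above.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// CA-signed client cert, standing in for the per-profile "minikube-user" cert.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, leafKey))
	fmt.Printf("CA: %d bytes DER, client cert: %d bytes DER\n", len(caDER), len(leafDER))
}
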
	I1026 14:15:18.585023  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 14:15:18.585066  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:15:18.585090  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:15:18.585119  716202 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 14:15:18.585748  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:15:18.605838  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 14:15:18.625037  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:15:18.643781  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 14:15:18.662808  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:15:18.681001  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:15:18.698733  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:15:18.715992  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:15:18.734119  716202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:15:18.755458  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:15:18.770028  716202 ssh_runner.go:195] Run: openssl version
	I1026 14:15:18.776631  716202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:15:18.786381  716202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.790413  716202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.790477  716202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:15:18.833779  716202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
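The b5213941.0 link name above comes from openssl x509 -hash, which prints the subject-name hash OpenSSL uses to look up CAs in a certificate directory. A short sketch that recomputes the link target for the .pem installed above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the log's "openssl x509 -hash -noout -in ...".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The hash names the /etc/ssl/certs/<hash>.0 symlink created above.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
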
	I1026 14:15:18.842215  716202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:15:18.845734  716202 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:15:18.845785  716202 kubeadm.go:400] StartCluster: {Name:addons-501661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-501661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:15:18.845863  716202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:15:18.845928  716202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:15:18.873758  716202 cri.go:89] found id: ""
	I1026 14:15:18.873909  716202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:15:18.881728  716202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:15:18.889624  716202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 14:15:18.889691  716202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:15:18.897763  716202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:15:18.897785  716202 kubeadm.go:157] found existing configuration files:
	
	I1026 14:15:18.897840  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:15:18.905755  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:15:18.905868  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:15:18.913465  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:15:18.921229  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:15:18.921330  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:15:18.928863  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:15:18.936853  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:15:18.936957  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:15:18.944357  716202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:15:18.952125  716202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:15:18.952239  716202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
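The four grep-then-rm exchanges above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it still references https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm can regenerate it. The same sweep as a local sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing elsewhere: remove it and let kubeadm rewrite it.
			os.Remove(f)
			fmt.Println("removed (or absent):", f)
		}
	}
}
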
	I1026 14:15:18.959819  716202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 14:15:19.002409  716202 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:15:19.002505  716202 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:15:19.032914  716202 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 14:15:19.032998  716202 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 14:15:19.033045  716202 kubeadm.go:318] OS: Linux
	I1026 14:15:19.033103  716202 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 14:15:19.033156  716202 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 14:15:19.033213  716202 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 14:15:19.033273  716202 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 14:15:19.033333  716202 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 14:15:19.033387  716202 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 14:15:19.033441  716202 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 14:15:19.033505  716202 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 14:15:19.033569  716202 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 14:15:19.105822  716202 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:15:19.105949  716202 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:15:19.106051  716202 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:15:19.120973  716202 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:15:19.127053  716202 out.go:252]   - Generating certificates and keys ...
	I1026 14:15:19.127177  716202 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:15:19.127269  716202 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:15:19.401969  716202 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:15:20.738268  716202 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:15:21.499039  716202 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:15:22.504830  716202 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:15:22.866126  716202 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:15:22.866472  716202 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-501661 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:15:23.357362  716202 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:15:23.357723  716202 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-501661 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:15:24.121270  716202 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:15:24.622975  716202 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:15:25.331633  716202 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:15:25.331989  716202 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:15:25.951164  716202 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:15:26.627951  716202 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:15:27.204678  716202 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:15:27.689972  716202 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:15:28.054610  716202 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:15:28.055251  716202 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:15:28.058055  716202 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:15:28.061559  716202 out.go:252]   - Booting up control plane ...
	I1026 14:15:28.061667  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:15:28.061746  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:15:28.061817  716202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:15:28.077919  716202 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:15:28.078055  716202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:15:28.088555  716202 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:15:28.088682  716202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:15:28.088779  716202 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:15:28.217537  716202 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:15:28.217659  716202 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:15:29.718791  716202 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501705386s
	I1026 14:15:29.722409  716202 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:15:29.722508  716202 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 14:15:29.722603  716202 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:15:29.722685  716202 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:15:33.956459  716202 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.233577585s
	I1026 14:15:34.870155  716202 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.147740541s
	I1026 14:15:36.725419  716202 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002903362s
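kubeadm's readiness gates above are plain HTTP(S) polls: the kubelet's healthz on 10248, then each control-plane component's healthz/livez endpoint until it answers 200. A minimal poll loop under the same 4m0s budget; InsecureSkipVerify stands in for the CA handling kubeadm really does, and the URL is one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // kubeadm's "up to 4m0s"
	url := "https://127.0.0.1:10257/healthz"    // kube-controller-manager
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthy:", url)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", url)
}
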
	I1026 14:15:36.749105  716202 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:15:36.762280  716202 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:15:36.777260  716202 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:15:36.777577  716202 kubeadm.go:318] [mark-control-plane] Marking the node addons-501661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:15:36.789014  716202 kubeadm.go:318] [bootstrap-token] Using token: p427n9.8rczgd3nf4ylhnbd
	I1026 14:15:36.792158  716202 out.go:252]   - Configuring RBAC rules ...
	I1026 14:15:36.792294  716202 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:15:36.798811  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:15:36.807471  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:15:36.811915  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:15:36.817593  716202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:15:36.821850  716202 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:15:37.132019  716202 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:15:37.566118  716202 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:15:38.132342  716202 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:15:38.133586  716202 kubeadm.go:318] 
	I1026 14:15:38.133665  716202 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:15:38.133679  716202 kubeadm.go:318] 
	I1026 14:15:38.133761  716202 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:15:38.133786  716202 kubeadm.go:318] 
	I1026 14:15:38.133816  716202 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:15:38.133882  716202 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:15:38.133939  716202 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:15:38.133948  716202 kubeadm.go:318] 
	I1026 14:15:38.134005  716202 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:15:38.134014  716202 kubeadm.go:318] 
	I1026 14:15:38.134064  716202 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:15:38.134073  716202 kubeadm.go:318] 
	I1026 14:15:38.134128  716202 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:15:38.134210  716202 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:15:38.134286  716202 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:15:38.134294  716202 kubeadm.go:318] 
	I1026 14:15:38.134383  716202 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:15:38.134468  716202 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:15:38.134476  716202 kubeadm.go:318] 
	I1026 14:15:38.134564  716202 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p427n9.8rczgd3nf4ylhnbd \
	I1026 14:15:38.134676  716202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 14:15:38.134704  716202 kubeadm.go:318] 	--control-plane 
	I1026 14:15:38.134713  716202 kubeadm.go:318] 
	I1026 14:15:38.134802  716202 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:15:38.134810  716202 kubeadm.go:318] 
	I1026 14:15:38.134896  716202 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p427n9.8rczgd3nf4ylhnbd \
	I1026 14:15:38.135007  716202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 14:15:38.137764  716202 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 14:15:38.138002  716202 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 14:15:38.138115  716202 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 14:15:38.138134  716202 cni.go:84] Creating CNI manager for ""
	I1026 14:15:38.138142  716202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:15:38.141234  716202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 14:15:38.144134  716202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 14:15:38.148873  716202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 14:15:38.148943  716202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 14:15:38.163122  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
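With the docker driver and the crio runtime, minikube recommends kindnet, stages its manifest at /var/tmp/minikube/cni.yaml, and applies it with the cluster's own kubectl against the node-local kubeconfig. The same invocation wrapped in Go, paths copied from the log (root on the node is required for it to work):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
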
	I1026 14:15:38.449130  716202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:15:38.449269  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:38.449354  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-501661 minikube.k8s.io/updated_at=2025_10_26T14_15_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-501661 minikube.k8s.io/primary=true
	I1026 14:15:38.469054  716202 ops.go:34] apiserver oom_adj: -16
	I1026 14:15:38.601307  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:39.101891  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:39.601356  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:40.102220  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:40.601638  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:41.102080  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:41.602098  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.101589  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.602154  716202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:42.752586  716202 kubeadm.go:1113] duration metric: took 4.303374534s to wait for elevateKubeSystemPrivileges
	I1026 14:15:42.752613  716202 kubeadm.go:402] duration metric: took 23.90683214s to StartCluster
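The burst of kubectl get sa default calls above is a half-second poll: the default service account has to exist before the minikube-rbac cluster-admin binding can take effect, and here the wait took about 4.3s. The loop, reduced to a sketch with the log's binary and kubeconfig paths and an arbitrary cap in place of minikube's real timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // arbitrary cap for the sketch
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default SA ready after", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}
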
	I1026 14:15:42.752630  716202 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:42.752742  716202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:15:42.753160  716202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:42.753895  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:15:42.753929  716202 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:42.754164  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:42.754203  716202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:15:42.754290  716202 addons.go:69] Setting yakd=true in profile "addons-501661"
	I1026 14:15:42.754316  716202 addons.go:238] Setting addon yakd=true in "addons-501661"
	I1026 14:15:42.754345  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.754404  716202 addons.go:69] Setting inspektor-gadget=true in profile "addons-501661"
	I1026 14:15:42.754430  716202 addons.go:238] Setting addon inspektor-gadget=true in "addons-501661"
	I1026 14:15:42.754476  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.754797  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.755005  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.755367  716202 addons.go:69] Setting metrics-server=true in profile "addons-501661"
	I1026 14:15:42.755401  716202 addons.go:238] Setting addon metrics-server=true in "addons-501661"
	I1026 14:15:42.755426  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.755845  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.756398  716202 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-501661"
	I1026 14:15:42.756421  716202 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-501661"
	I1026 14:15:42.756457  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.756915  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.759216  716202 addons.go:69] Setting cloud-spanner=true in profile "addons-501661"
	I1026 14:15:42.759251  716202 addons.go:238] Setting addon cloud-spanner=true in "addons-501661"
	I1026 14:15:42.759319  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.760016  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.761536  716202 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-501661"
	I1026 14:15:42.766540  716202 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-501661"
	I1026 14:15:42.766591  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.767073  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.783233  716202 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-501661"
	I1026 14:15:42.783302  716202 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-501661"
	I1026 14:15:42.783338  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.783829  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766050  716202 addons.go:69] Setting registry=true in profile "addons-501661"
	I1026 14:15:42.784000  716202 addons.go:238] Setting addon registry=true in "addons-501661"
	I1026 14:15:42.784023  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.784423  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766069  716202 addons.go:69] Setting registry-creds=true in profile "addons-501661"
	I1026 14:15:42.789198  716202 addons.go:238] Setting addon registry-creds=true in "addons-501661"
	I1026 14:15:42.789252  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.789709  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766080  716202 addons.go:69] Setting storage-provisioner=true in profile "addons-501661"
	I1026 14:15:42.826857  716202 addons.go:238] Setting addon storage-provisioner=true in "addons-501661"
	I1026 14:15:42.826937  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.827664  716202 addons.go:69] Setting default-storageclass=true in profile "addons-501661"
	I1026 14:15:42.827723  716202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-501661"
	I1026 14:15:42.828123  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766086  716202 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-501661"
	I1026 14:15:42.836155  716202 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-501661"
	I1026 14:15:42.842532  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766096  716202 addons.go:69] Setting volcano=true in profile "addons-501661"
	I1026 14:15:42.861448  716202 addons.go:238] Setting addon volcano=true in "addons-501661"
	I1026 14:15:42.861513  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.862063  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.864828  716202 addons.go:69] Setting gcp-auth=true in profile "addons-501661"
	I1026 14:15:42.864909  716202 mustload.go:65] Loading cluster: addons-501661
	I1026 14:15:42.865154  716202 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:42.865457  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766102  716202 addons.go:69] Setting volumesnapshots=true in profile "addons-501661"
	I1026 14:15:42.881002  716202 addons.go:238] Setting addon volumesnapshots=true in "addons-501661"
	I1026 14:15:42.881077  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.881601  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.766335  716202 out.go:179] * Verifying Kubernetes components...
	I1026 14:15:42.887031  716202 addons.go:69] Setting ingress=true in profile "addons-501661"
	I1026 14:15:42.887057  716202 addons.go:238] Setting addon ingress=true in "addons-501661"
	I1026 14:15:42.887110  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.887644  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.892820  716202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:42.908290  716202 addons.go:69] Setting ingress-dns=true in profile "addons-501661"
	I1026 14:15:42.908322  716202 addons.go:238] Setting addon ingress-dns=true in "addons-501661"
	I1026 14:15:42.908467  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:42.909143  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:42.931172  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.000001  716202 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:15:43.003291  716202 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:15:43.004071  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:15:43.004168  716202 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:15:43.008944  716202 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:43.009020  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:15:43.009110  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.012300  716202 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:43.012724  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:15:43.012908  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.033243  716202 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:15:43.058941  716202 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:15:43.059095  716202 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:15:43.064932  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	W1026 14:15:43.065305  716202 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:15:43.071814  716202 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-501661"
	I1026 14:15:43.078766  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.079239  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.085105  716202 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:15:43.090917  716202 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:15:43.091623  716202 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:15:43.091684  716202 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:15:43.091790  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:15:43.091887  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.093005  716202 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:43.093021  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:15:43.093078  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.071922  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:15:43.095971  716202 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:15:43.096036  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.099535  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.101631  716202 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:43.101686  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:15:43.101774  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.071929  716202 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:15:43.103412  716202 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:15:43.107054  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:15:43.107142  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.108335  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:15:43.113641  716202 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:15:43.113878  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:15:43.113947  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:15:43.113970  716202 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:15:43.114060  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.116639  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:15:43.116806  716202 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:43.116819  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:15:43.116885  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.150284  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:15:43.150308  716202 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:15:43.150372  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.185848  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:15:43.185936  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:15:43.192822  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:15:43.195682  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:43.198572  716202 addons.go:238] Setting addon default-storageclass=true in "addons-501661"
	I1026 14:15:43.199466  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:43.199914  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:43.204862  716202 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:15:43.205226  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.206313  716202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:15:43.205498  716202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 14:15:43.208528  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:43.208655  716202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:43.208666  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 14:15:43.208752  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.233662  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:15:43.233693  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:15:43.233768  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.255123  716202 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:43.255144  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:15:43.255207  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.260530  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.276322  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.289699  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.362310  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.367196  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.381098  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.386342  716202 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:15:43.389139  716202 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:15:43.395219  716202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:43.395242  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:15:43.395307  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:43.395530  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.396316  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.397299  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.436178  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.443164  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.449868  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.452110  716202 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:43.452129  716202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:15:43.452201  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	W1026 14:15:43.459443  716202 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:43.459542  716202 retry.go:31] will retry after 180.001402ms: ssh: handshake failed: EOF
	I1026 14:15:43.474657  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:43.496319  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	W1026 14:15:43.497933  716202 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:43.497960  716202 retry.go:31] will retry after 219.050644ms: ssh: handshake failed: EOF
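The sshutil.go:64 / retry.go:31 pairs above show a transient dial failure (SSH handshake EOF) being retried after a short, slightly randomized delay. A minimal sketch of that retry-after-delay pattern, under the assumption of a generic fallible function; this is not minikube's actual retry implementation:

-- sketch (Go) --
// retryDial retries fn after increasing, jittered delays, mirroring the
// "will retry after 180.001402ms" messages in the log. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryDial(fn func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential backoff plus jitter, like the varying delays logged above.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryDial(func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	}, 5, 100*time.Millisecond)
	fmt.Println("result:", err)
}
-- /sketch --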
	I1026 14:15:43.533129  716202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:43.851475  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:43.921024  716202 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:43.921059  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:15:44.032982  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:44.088331  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:15:44.088358  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:15:44.097809  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:44.134051  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:44.141751  716202 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:15:44.141818  716202 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:15:44.186408  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:15:44.186472  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:15:44.200180  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:44.276353  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:15:44.276419  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:15:44.283692  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:44.298954  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:44.302066  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:15:44.302135  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:15:44.314909  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:44.317038  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:15:44.317101  716202 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:15:44.319253  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:44.347497  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:44.367298  716202 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:44.367367  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:15:44.380613  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:15:44.380687  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:15:44.462553  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:15:44.462626  716202 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:15:44.524217  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:44.525432  716202 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:15:44.525488  716202 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:15:44.545769  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:15:44.545849  716202 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:15:44.570403  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:15:44.570491  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:15:44.714114  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:15:44.714185  716202 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:15:44.734172  716202 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:44.734237  716202 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:15:44.735463  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:15:44.735519  716202 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:15:44.817930  716202 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:44.818005  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:15:44.851520  716202 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.644790962s)
	I1026 14:15:44.851615  716202 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318461901s)
	I1026 14:15:44.851643  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.000136664s)
	I1026 14:15:44.851686  716202 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
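The host-record injection completed above is done in the log via a sed pipeline plus "kubectl replace". An equivalent effect can be had through client-go directly; a hedged sketch of that alternative, using the kubeconfig path and the hosts block taken from the logged command (the approach itself is illustrative, not minikube's implementation):

-- sketch (Go) --
// Insert a host.minikube.internal hosts block into CoreDNS's Corefile
// ahead of the forward directive, as the sed expression in the log does.
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const hostsBlock = "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	idx := strings.Index(corefile, "        forward .")
	if idx >= 0 && !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = corefile[:idx] + hostsBlock + corefile[idx:]
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
-- /sketch --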
	I1026 14:15:44.853456  716202 node_ready.go:35] waiting up to 6m0s for node "addons-501661" to be "Ready" ...
	I1026 14:15:44.853720  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:15:44.853736  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:15:44.908892  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:45.013769  716202 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:45.013863  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:15:45.022658  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:45.133934  716202 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:15:45.134021  716202 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:15:45.262121  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:45.368529  716202 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-501661" context rescaled to 1 replicas
	I1026 14:15:45.435479  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:15:45.435544  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:15:45.567168  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:15:45.567231  716202 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:15:45.790584  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:15:45.790659  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:15:46.018343  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:15:46.018417  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:15:46.245768  716202 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:46.245796  716202 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:15:46.449259  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1026 14:15:46.886479  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
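The node_ready.go lines wait up to 6m0s for the node's Ready condition, retrying while it reports "Ready":"False" as above. A sketch of that poll with a standard client-go clientset; the node name and kubeconfig path come from the log, the interval and helper are illustrative:

-- sketch (Go) --
// Poll a node's Ready condition until it is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(cs, "addons-501661"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // retry interval is illustrative
	}
	fmt.Println("timed out waiting for Ready")
}
-- /sketch --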
	I1026 14:15:47.503487  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.470461052s)
	W1026 14:15:47.503529  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:47.503548  716202 retry.go:31] will retry after 229.260003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
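The validation failure above is deterministic, not transient: kubectl's client-side validation rejects any manifest document missing apiVersion or kind, so re-applying the same ig-crd.yaml hits the same error on every retry (which is what the repeated blocks below show). A small pre-flight check for that condition; the file path is from the log, the "\n---" document split is an approximation, and sigs.k8s.io/yaml is an assumed dependency:

-- sketch (Go) --
// Report manifest documents that lack apiVersion or kind, the condition
// kubectl's validator complains about in the log above.
package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	// Approximate multi-document split on "---" separators.
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue // empty documents are ignored by kubectl
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("doc %d: unparsable: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("doc %d: apiVersion/kind not set; validation will fail\n", i)
		}
	}
}
-- /sketch --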
	I1026 14:15:47.503616  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.405781778s)
	I1026 14:15:47.503672  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.369544623s)
	I1026 14:15:47.733353  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:47.866124  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.665859479s)
	I1026 14:15:47.866199  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.582444968s)
	I1026 14:15:47.866430  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.567410875s)
	I1026 14:15:47.866485  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.551496838s)
	I1026 14:15:47.866540  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.547227878s)
	W1026 14:15:47.982361  716202 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
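The storage class error above is a standard optimistic-concurrency conflict: another writer updated "local-path" between this client's read and write, so the stale resourceVersion was rejected. client-go's retry.RetryOnConflict handles exactly this by re-reading and re-applying the mutation; a sketch of marking the class default that way (the is-default-class annotation is the standard one; the rest is illustrative, not minikube's code):

-- sketch (Go) --
// Retry a StorageClass update on conflict, re-reading the object each
// attempt so the mutation applies to the latest resourceVersion.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
-- /sketch --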
	I1026 14:15:49.205258  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.680954568s)
	I1026 14:15:49.205296  716202 addons.go:479] Verifying addon registry=true in "addons-501661"
	I1026 14:15:49.205557  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.296634746s)
	I1026 14:15:49.205575  716202 addons.go:479] Verifying addon metrics-server=true in "addons-501661"
	I1026 14:15:49.205633  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.182900296s)
	I1026 14:15:49.205700  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.858118778s)
	I1026 14:15:49.205719  716202 addons.go:479] Verifying addon ingress=true in "addons-501661"
	I1026 14:15:49.205894  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.943682604s)
	W1026 14:15:49.206522  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:49.206553  716202 retry.go:31] will retry after 344.813051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
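"ensure CRDs are installed first" pinpoints the race: the VolumeSnapshotClass object was applied in the same batch as the CRD that defines it, before the API server had established the new type, so the REST mapping lookup failed (the --force retry below succeeds once the CRD is ready). Waiting for the CRD's Established condition avoids the race; a sketch with the apiextensions clientset, using the CRD name from the log (poll interval and timeout are illustrative):

-- sketch (Go) --
// Wait for a CRD to report Established=True before applying custom
// resources of that kind.
package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "volumesnapshotclasses.snapshot.storage.k8s.io"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
					fmt.Println("CRD established; safe to apply VolumeSnapshotClass")
					return
				}
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for CRD")
}
-- /sketch --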
	I1026 14:15:49.208816  716202 out.go:179] * Verifying registry addon...
	I1026 14:15:49.208953  716202 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-501661 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:15:49.208974  716202 out.go:179] * Verifying ingress addon...
	I1026 14:15:49.212572  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:15:49.218382  716202 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:15:49.246224  716202 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:49.246251  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.246829  716202 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:15:49.246847  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
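The kapi.go lines above (and the long runs of "current state: Pending" below) poll pods matching a label selector until they leave the Pending phase. An equivalent client-go sketch; the selector and namespace come from the log, while the interval is illustrative and the loop omits the timeout the real wait enforces:

-- sketch (Go) --
// List pods by label selector and report until all are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sel := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
				}
			}
			if running {
				fmt.Println("all pods Running")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
-- /sketch --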
	W1026 14:15:49.366801  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:49.551927  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:49.732388  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.744140  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.844072  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.394765662s)
	I1026 14:15:49.844160  716202 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-501661"
	I1026 14:15:49.844402  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.11100797s)
	W1026 14:15:49.844451  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:49.844558  716202 retry.go:31] will retry after 213.482896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:49.847673  716202 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:15:49.851536  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:15:49.869315  716202 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:49.869388  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.058871  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:50.217840  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.225549  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.356611  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.710037  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:15:50.710142  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:50.729170  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.729235  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.735346  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:50.855156  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.865522  716202 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:15:50.882325  716202 addons.go:238] Setting addon gcp-auth=true in "addons-501661"
	I1026 14:15:50.882377  716202 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:15:50.882823  716202 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:15:50.901076  716202 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:15:50.901136  716202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:15:50.925841  716202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:15:51.216438  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.228607  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.355613  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.716060  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.722103  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.855182  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:51.857325  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:52.216822  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.222170  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.329122  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.777147292s)
	I1026 14:15:52.329217  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.270298507s)
	W1026 14:15:52.329243  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:52.329261  716202 retry.go:31] will retry after 516.72397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:52.329298  716202 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.42819996s)
	I1026 14:15:52.332269  716202 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:52.335131  716202 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:15:52.337953  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:15:52.337972  716202 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:15:52.351647  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:15:52.351934  716202 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:15:52.355889  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.367979  716202 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:52.368003  716202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:15:52.381717  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:52.719876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.786164  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.846900  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:52.884382  716202 addons.go:479] Verifying addon gcp-auth=true in "addons-501661"
	I1026 14:15:52.885942  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.887644  716202 out.go:179] * Verifying gcp-auth addon...
	I1026 14:15:52.891303  716202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:15:52.901785  716202 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:15:52.901805  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.216352  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.221421  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.355731  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.394834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:53.701731  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:53.701765  716202 retry.go:31] will retry after 707.370273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:53.716173  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.722414  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.856340  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.895250  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.216139  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.225672  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.354809  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:54.356934  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:54.394812  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.410182  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:54.715814  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.722606  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.856663  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.894168  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.216409  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.227433  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:55.240380  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:55.240410  716202 retry.go:31] will retry after 1.206291057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:55.355621  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.394630  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.715785  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.721722  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.854916  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.894809  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.216036  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.222902  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.355157  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:56.357438  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:56.395140  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.447223  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:56.715834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.722204  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.858074  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.895229  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.216556  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.225293  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:57.260166  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:57.260196  716202 retry.go:31] will retry after 1.06760712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:57.355325  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.394090  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.716803  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.721691  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.854262  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.894246  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.215348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.229836  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.328214  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:58.357513  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:15:58.358437  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.395050  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.715980  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.722207  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.856444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.894018  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:59.138895  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:59.138941  716202 retry.go:31] will retry after 2.453558555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:59.215579  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.221421  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.355665  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.394558  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.715911  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.721783  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.854590  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.894609  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.282485  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.283118  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.359816  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:00.359800  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:00.395070  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.716352  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.722398  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.855288  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.894995  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.216348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.228345  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.355373  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.394008  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.593343  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:01.716260  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.722487  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.856794  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.895069  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.215676  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.221913  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.356918  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.395647  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:02.407365  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:02.407398  716202 retry.go:31] will retry after 5.142719519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 14:16:02.716305  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.722070  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.855059  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:02.856973  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:02.894674  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.216133  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.222509  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.354893  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.394138  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.717162  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.722114  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.854999  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.894824  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.215684  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.227336  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.356516  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.394317  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.716534  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.721651  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.854342  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.895332  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.215986  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.225857  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.354799  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:05.357328  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:05.394282  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.715835  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.721545  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.855125  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.894877  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.216117  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.226487  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.354920  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.407876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.716218  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.721997  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.855210  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.895265  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.216490  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.222638  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.355716  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.394528  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.550749  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:07.716732  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.727931  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.856188  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:07.859660  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:07.895108  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.216095  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.221988  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.358225  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:08.376075  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:08.376164  716202 retry.go:31] will retry after 6.878255973s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 14:16:08.394800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.715997  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.721734  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.854575  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.894626  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.215754  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.227692  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.354637  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.395532  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.716100  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.722401  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.855129  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.894525  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.215444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.225624  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.355750  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:10.358365  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:10.394968  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.715562  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.721134  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.854989  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.895018  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.216198  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.222698  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.354519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.394343  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.715947  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.721908  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.855953  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.894800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.215933  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.221780  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.354518  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.394697  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.715495  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.721322  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.855429  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:12.856315  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:12.894264  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.215683  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.221856  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.354989  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.395230  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.716250  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.722334  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.855519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.894584  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.215299  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.222131  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.354958  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.394608  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.715845  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.721680  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.854461  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:14.856778  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:14.894774  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.216076  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:15.222591  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.254921  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:15.356006  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.394111  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.716328  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:15.722042  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.860196  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.895490  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:16.093097  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:16.093133  716202 retry.go:31] will retry after 10.749955074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
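
The intervals announced by retry.go (2.45s, 5.14s, 6.88s, 10.75s) are consistent with exponential backoff plus jitter. A stdlib-only sketch of that pattern follows; the base interval, multiplier, and jitter range are guesses for illustration, not minikube's actual constants.

    // backoff.go: exponential backoff with jitter, shaped like the
    // retry intervals in the log above. Constants are illustrative.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs op up to attempts times, sleeping an exponentially
    // growing, jittered interval between failures.
    func retry(attempts int, base time.Duration, op func() error) error {
    	wait := base
    	for i := 0; i < attempts; i++ {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			return fmt.Errorf("after %d attempts: %w", attempts, err)
    		}
    		// jitter in [0.5, 1.5) of the nominal wait
    		jittered := time.Duration(float64(wait) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", jittered, err)
    		time.Sleep(jittered)
    		wait *= 2
    	}
    	return nil
    }

    func main() {
    	err := retry(5, 2*time.Second, func() error {
    		return errors.New("apply failed") // stand-in for the kubectl apply
    	})
    	fmt.Println(err)
    }
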
	I1026 14:16:16.215888  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:16.225612  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.356255  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.395470  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.715515  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:16.721150  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.854982  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:16.857176  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:16.895064  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.216664  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:17.227619  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.354661  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.395273  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.715447  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:17.722217  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.855165  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.894289  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.216346  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:18.222187  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.355771  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.394982  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.716423  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:18.721455  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.855569  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:18.857714  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:18.894570  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.215831  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:19.223789  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.354912  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.394667  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.715571  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:19.721423  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.855346  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.894397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:20.215561  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:20.229153  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.354871  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.394793  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:20.716164  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:20.722050  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.855847  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.894918  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:21.215798  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:21.222059  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.355670  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:21.356114  716202 node_ready.go:57] node "addons-501661" has "Ready":"False" status (will retry)
	I1026 14:16:21.394993  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:21.716408  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:21.722136  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.856689  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.894898  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:22.216080  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:22.225724  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.354882  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.394260  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:22.716447  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:22.721384  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.856198  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.894926  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:23.216001  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:23.225696  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.371888  716202 node_ready.go:49] node "addons-501661" is "Ready"
	I1026 14:16:23.371918  716202 node_ready.go:38] duration metric: took 38.518266051s for node "addons-501661" to be "Ready" ...
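
The gate that just cleared is a poll on the node's Ready condition, which flipped to True after roughly 38.5s of the warnings above. A client-go sketch of that kind of check follows; the kubeconfig path and the 2s cadence are assumptions read off the log, not minikube's implementation.

    // nodeready.go: polls a node's Ready condition, the check behind the
    // node_ready.go lines above. Sketch only.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-501661", metav1.GetOptions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				fmt.Printf("node %q has \"Ready\":%q\n", node.Name, c.Status)
    				if c.Status == corev1.ConditionTrue {
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // approximate cadence seen in the log
    	}
    }
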
	I1026 14:16:23.371933  716202 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:16:23.372014  716202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:16:23.396671  716202 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:16:23.396727  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.410186  716202 api_server.go:72] duration metric: took 40.656211973s to wait for apiserver process to appear ...
	I1026 14:16:23.410213  716202 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:16:23.410232  716202 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 14:16:23.434616  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:23.440772  716202 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 14:16:23.447160  716202 api_server.go:141] control plane version: v1.34.1
	I1026 14:16:23.447206  716202 api_server.go:131] duration metric: took 36.975658ms to wait for apiserver health ...
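
The healthz probe itself is a plain HTTPS GET that expects a 200 with the literal body "ok", as the two lines above show. A sketch of such a probe follows; TLS verification is skipped here purely to keep the example short, whereas the real client authenticates with the cluster's certificates.

    // healthz.go: probes the apiserver /healthz endpoint checked above.
    // TLS verification is skipped for illustration only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    }
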
	I1026 14:16:23.447232  716202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:16:23.477172  716202 system_pods.go:59] 19 kube-system pods found
	I1026 14:16:23.477208  716202 system_pods.go:61] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending
	I1026 14:16:23.477241  716202 system_pods.go:61] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending
	I1026 14:16:23.477253  716202 system_pods.go:61] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.477260  716202 system_pods.go:61] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.477265  716202 system_pods.go:61] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.477269  716202 system_pods.go:61] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.477274  716202 system_pods.go:61] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.477303  716202 system_pods.go:61] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.477314  716202 system_pods.go:61] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.477319  716202 system_pods.go:61] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.477324  716202 system_pods.go:61] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.477343  716202 system_pods.go:61] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.477355  716202 system_pods.go:61] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.477361  716202 system_pods.go:61] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending
	I1026 14:16:23.477378  716202 system_pods.go:61] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending
	I1026 14:16:23.477390  716202 system_pods.go:61] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.477396  716202 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.477411  716202 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.477424  716202 system_pods.go:61] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending
	I1026 14:16:23.477430  716202 system_pods.go:74] duration metric: took 30.184012ms to wait for pod list to return data ...
	I1026 14:16:23.477455  716202 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:16:23.506250  716202 default_sa.go:45] found service account: "default"
	I1026 14:16:23.506279  716202 default_sa.go:55] duration metric: took 28.812458ms for default service account to be created ...
	I1026 14:16:23.506299  716202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:16:23.514922  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:23.514954  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending
	I1026 14:16:23.514960  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending
	I1026 14:16:23.514965  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.514969  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.514972  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.515009  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.515019  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.515024  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.515028  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.515032  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.515041  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.515051  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.515058  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.515087  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending
	I1026 14:16:23.515091  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending
	I1026 14:16:23.515106  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.515119  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.515123  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.515127  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending
	I1026 14:16:23.515159  716202 retry.go:31] will retry after 286.658676ms: missing components: kube-dns
	I1026 14:16:23.782747  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.782771  716202 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:16:23.782784  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:23.834637  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:23.834683  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:23.834693  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:23.834699  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending
	I1026 14:16:23.834737  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending
	I1026 14:16:23.834742  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:23.834747  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:23.834757  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:23.834761  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:23.834769  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending
	I1026 14:16:23.834798  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:23.834815  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:23.834828  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:23.834833  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending
	I1026 14:16:23.834840  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:23.834849  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:23.834853  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending
	I1026 14:16:23.834857  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending
	I1026 14:16:23.834881  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending
	I1026 14:16:23.834891  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:23.834906  716202 retry.go:31] will retry after 363.438345ms: missing components: kube-dns
	I1026 14:16:23.871457  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.900412  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:24.208353  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:24.208410  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:24.208439  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:24.208457  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:24.208466  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:24.208475  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:24.208496  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:24.208507  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:24.208511  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:24.208528  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:24.208541  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:24.208546  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:24.208552  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:24.208576  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:24.208588  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:24.208596  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:24.208611  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:24.208618  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.208630  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.208649  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:24.208678  716202 retry.go:31] will retry after 438.728691ms: missing components: kube-dns
	I1026 14:16:24.224275  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:24.224906  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:24.357877  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.475417  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:24.654845  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:24.654924  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:24.654950  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:24.654973  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:24.655022  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:24.655043  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:24.655062  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:24.655090  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:24.655111  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:24.655130  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:24.655153  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:24.655179  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:24.655199  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:24.655221  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:24.655245  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:24.655265  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:24.655288  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:24.655312  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.655331  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:24.655349  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:16:24.655377  716202 retry.go:31] will retry after 540.807434ms: missing components: kube-dns
	I1026 14:16:24.745840  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:24.746351  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:24.865221  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.902436  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:25.203524  716202 system_pods.go:86] 19 kube-system pods found
	I1026 14:16:25.203616  716202 system_pods.go:89] "coredns-66bc5c9577-5nrx2" [9ce8c52a-74a8-4ad0-915b-9389c8b81fcb] Running
	I1026 14:16:25.203644  716202 system_pods.go:89] "csi-hostpath-attacher-0" [0b4ef9be-0304-49e7-be1e-b6dcbd9bb22e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:16:25.203672  716202 system_pods.go:89] "csi-hostpath-resizer-0" [da9ee03e-3e3d-409e-bb17-7928dbb07b8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:16:25.203699  716202 system_pods.go:89] "csi-hostpathplugin-bdsts" [ad9d5498-66d7-43a2-851f-7363f58f805a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:16:25.203718  716202 system_pods.go:89] "etcd-addons-501661" [ce79e2fc-7af8-421d-a7fc-7d7caaa70b2a] Running
	I1026 14:16:25.203735  716202 system_pods.go:89] "kindnet-wggwr" [9691a455-81bf-446f-b103-d5d02349840f] Running
	I1026 14:16:25.203759  716202 system_pods.go:89] "kube-apiserver-addons-501661" [6dba0de7-4bdf-4600-a7eb-e134dfde8b8e] Running
	I1026 14:16:25.203779  716202 system_pods.go:89] "kube-controller-manager-addons-501661" [d0e5aa4f-320f-42b5-8f6d-60b2f0306cff] Running
	I1026 14:16:25.203805  716202 system_pods.go:89] "kube-ingress-dns-minikube" [53b96fc2-c641-40b8-bd50-2945c79ddf10] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:25.203823  716202 system_pods.go:89] "kube-proxy-rxl4x" [75a93d65-580a-45b3-a1c6-52b8c9ec85e6] Running
	I1026 14:16:25.203843  716202 system_pods.go:89] "kube-scheduler-addons-501661" [74c79575-f3c6-490f-9fce-e3ba470a5fa6] Running
	I1026 14:16:25.203864  716202 system_pods.go:89] "metrics-server-85b7d694d7-ljcz5" [4e56bfb7-dac1-4a05-b4a1-1f5440ece6c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:25.203885  716202 system_pods.go:89] "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:25.203906  716202 system_pods.go:89] "registry-6b586f9694-ndtxx" [84407522-f6d6-4ca4-8295-caec6faee6ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:25.203936  716202 system_pods.go:89] "registry-creds-764b6fb674-2fxp4" [811c0810-16ef-4371-bf68-45470eb5ca98] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:25.203957  716202 system_pods.go:89] "registry-proxy-26bjw" [95d7752b-839f-4c2e-9a0b-be3bea86c67f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:25.203977  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbl7s" [80b74e9f-a353-4e81-b2ce-1387eab89ccb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:25.204001  716202 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpxf6" [5b671b19-6ab8-465f-9381-f10dd2f974b0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:25.204019  716202 system_pods.go:89] "storage-provisioner" [4b26ef36-6ae1-43b2-a7ef-5ee16c202e72] Running
	I1026 14:16:25.204042  716202 system_pods.go:126] duration metric: took 1.697736705s to wait for k8s-apps to be running ...
	I1026 14:16:25.204062  716202 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:16:25.204141  716202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:16:25.217174  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:25.219412  716202 system_svc.go:56] duration metric: took 15.342715ms WaitForService to wait for kubelet
	I1026 14:16:25.219497  716202 kubeadm.go:586] duration metric: took 42.465538074s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:16:25.219531  716202 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:16:25.227910  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:25.231735  716202 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 14:16:25.231813  716202 node_conditions.go:123] node cpu capacity is 2
	I1026 14:16:25.231840  716202 node_conditions.go:105] duration metric: took 12.287672ms to run NodePressure ...
	I1026 14:16:25.231873  716202 start.go:241] waiting for startup goroutines ...
	I1026 14:16:25.355644  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.394846  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:25.715893  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:25.721959  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:25.858452  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.895166  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:26.218012  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:26.223075  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:26.362777  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.461348  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:26.719292  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:26.723810  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:26.844181  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:26.860547  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.894672  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:27.216629  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.246683  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:27.355016  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:27.394934  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:27.716181  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.722229  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:27.855897  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:27.895010  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:28.216742  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:28.229384  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.260387  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.416166033s)
	W1026 14:16:28.260427  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:28.260445  716202 retry.go:31] will retry after 18.725162743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
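[editor's note] The failure above is kubectl's client-side schema validation: every document in an applied manifest must carry top-level apiVersion and kind fields, and at least one document in ig-crd.yaml has neither (the other resources in the same apply, such as the gadget namespace and daemonset, validate fine and are reported unchanged/configured). For reference, a syntactically valid CRD document starts like the sketch below; the group, names, and schema here are placeholders, not the actual Inspektor Gadget CRD.

apiVersion: apiextensions.k8s.io/v1   # required top-level field, missing in ig-crd.yaml
kind: CustomResourceDefinition        # required top-level field, missing in ig-crd.yaml
metadata:
  name: traces.example.com            # placeholder: must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object

As the error text notes, --validate=false would skip this check, but that only hides the broken document rather than installing a usable CRD.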
	I1026 14:16:28.355892  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:28.395474  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:28.716012  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:28.721917  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.855649  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:28.897038  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:29.216977  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:29.221702  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:29.355121  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:29.395181  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:29.716513  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:29.721732  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:29.855304  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:29.894200  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:30.217414  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:30.221855  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:30.357186  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:30.457065  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:30.716227  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:30.721910  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:30.855245  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:30.894175  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:31.216646  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:31.225606  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:31.361855  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:31.395510  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:31.715920  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:31.722221  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:31.856101  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:31.895323  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:32.216167  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:32.228508  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:32.355027  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:32.395043  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:32.718193  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:32.722943  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:32.856985  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:32.895976  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:33.216779  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:33.225369  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:33.355485  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:33.394475  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:33.715804  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:33.722496  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:33.855703  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:33.894819  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:34.216103  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:34.222571  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:34.355330  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:34.395614  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:34.715946  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:34.723049  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:34.855602  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:34.894349  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:35.217274  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:35.226724  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:35.356504  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:35.456075  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:35.716396  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:35.722275  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:35.855872  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:35.956307  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:36.216633  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:36.222097  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:36.356093  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:36.395860  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:36.716633  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:36.723489  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:36.858181  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:36.896463  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:37.217458  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:37.222037  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:37.357088  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:37.395456  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:37.716782  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:37.722964  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:37.863737  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:37.895460  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:38.216998  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:38.222957  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:38.356124  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:38.395781  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:38.716154  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:38.721864  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:38.854834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:38.894514  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:39.215906  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:39.221794  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:39.356011  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:39.394673  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:39.715647  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:39.721650  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:39.854644  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:39.894297  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:40.215684  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:40.221427  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:40.356291  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:40.394286  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:40.716097  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:40.721990  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:40.855825  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:40.895209  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:41.216592  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:41.221651  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:41.355042  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:41.395321  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:41.716194  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:41.722743  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:41.856099  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:41.895566  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:42.217508  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:42.223423  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:42.356382  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:42.394158  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:42.718391  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:42.722193  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:42.855071  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:42.894622  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:43.215411  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:43.222777  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:43.355520  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:43.394918  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:43.717776  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:43.723608  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:43.855523  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:43.895822  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:44.216184  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:44.223230  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:44.358151  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:44.396528  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:44.715739  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:44.721653  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:44.854659  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:44.894575  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:45.217986  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:45.222583  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:45.355036  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:45.394875  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:45.716334  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:45.721733  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:45.855401  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:45.894534  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:46.216152  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:46.222889  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:46.355283  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:46.394541  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:46.715714  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:46.721753  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:46.854990  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:46.895666  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:46.985730  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:47.217233  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.222718  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.355061  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.395802  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:47.717024  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.722148  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.855800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.894521  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.075306  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.089535563s)
	W1026 14:16:48.075399  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:48.075431  716202 retry.go:31] will retry after 30.211838015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
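[editor's note] Note how the retry intervals grow across attempts (540ms for the missing kube-dns component earlier, then 18.7s and now 30.2s for this apply): retry.go:31 spaces out attempts so a persistently failing command does not hot-loop. A minimal sketch of randomized exponential backoff in Go follows; the base delay, factor, jitter fraction, and attempt cap are all assumptions, not minikube's actual retry parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with exponentially growing, jittered delays.
// This is a hedged sketch of the pattern, not minikube's retry.go.
func retryWithBackoff(op func() error) error {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if err := op(); err == nil {
			return nil
		}
		// Up to 50% jitter keeps concurrent retries from synchronizing.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("all retries exhausted")
}

func main() {
	_ = retryWithBackoff(func() error { return errors.New("apply failed") })
}
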
	I1026 14:16:48.216539  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:48.221651  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:48.355282  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:48.394119  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.717111  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:48.722244  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:48.855737  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:48.894812  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:49.216387  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:49.222806  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:49.355335  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:49.394159  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:49.717271  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:49.725416  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:49.855290  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:49.894160  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:50.216733  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:50.221892  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:50.355152  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:50.395232  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:50.717228  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:50.722349  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:50.855937  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:50.894834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:51.216567  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:51.221706  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:51.355274  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:51.395555  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:51.716092  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:51.722624  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:51.855923  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:51.895402  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:52.217069  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:52.223472  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:52.357551  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:52.395341  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:52.716566  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:52.721898  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:52.855302  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:52.895397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:53.216227  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:53.222786  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:53.355503  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:53.394931  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:53.715598  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:53.721158  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:53.855901  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:53.894937  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:54.216780  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:54.222967  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:54.355344  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:54.394310  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:54.716287  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:54.722033  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:54.855008  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:54.894454  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:55.215720  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:55.222350  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:55.357572  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:55.394676  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:55.715536  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:55.721550  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:55.856151  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:55.895474  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:56.217033  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:56.222716  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:56.356052  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:56.456005  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:56.717560  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:56.722503  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:56.856431  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:56.895957  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:57.217288  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:57.223993  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:57.355812  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:57.394860  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:57.715850  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:57.722062  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:57.855016  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:57.895058  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:58.216676  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:58.221812  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:58.355080  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:58.395065  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:58.716742  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:58.721979  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:58.855491  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:58.894552  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:59.215796  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:59.222093  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:59.355949  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:59.395035  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:59.717560  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:59.721440  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:59.855735  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:59.894617  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:00.242688  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:00.249942  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:00.363437  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:00.413357  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:00.717218  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:00.723133  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:00.856815  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:00.895771  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:01.216371  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:01.225015  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:01.356819  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:01.395848  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:01.717518  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:01.722363  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:01.857153  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:01.896360  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:02.216804  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:02.223192  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:02.356445  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:02.395685  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:02.717275  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:02.723061  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:02.855980  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:02.895467  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:03.216626  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:03.222078  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:03.355641  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:03.395340  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:03.717361  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:03.723328  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:03.856502  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:03.895709  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:04.216819  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:04.222703  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:04.355570  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:04.394452  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:04.717365  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:04.722488  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:04.860288  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:04.964082  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:05.217908  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:05.228826  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:05.356443  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:05.395270  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:05.716800  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:05.722189  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:05.855447  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:05.895421  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:06.218100  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:06.229801  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:06.355079  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:06.396019  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:06.717139  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:06.722543  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:06.857058  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:06.897342  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:07.215919  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:07.231921  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:07.356555  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:07.395345  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:07.733637  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:07.733738  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:07.855383  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:07.894792  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:08.216675  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:08.221883  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:08.356007  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:08.394635  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:08.716255  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:08.722456  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:08.854996  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:08.895163  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:09.217407  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:09.227271  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:09.356167  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:09.395407  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:09.715900  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:09.721561  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:09.854809  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:09.895133  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:10.217065  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:10.222089  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:10.355433  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:10.394543  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:10.715834  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:10.721682  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:10.854906  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:10.894900  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:11.216373  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:11.227692  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:11.355086  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:11.394924  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:11.716466  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:11.721310  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:11.855270  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:11.894825  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:12.216624  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:12.221823  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:12.355661  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:12.394373  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:12.717683  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:12.721417  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:12.855620  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:12.894906  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:13.216364  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:13.229448  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:13.357321  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:13.394285  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:13.716945  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:13.722313  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:13.856119  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:13.895858  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:14.216490  716202 kapi.go:107] duration metric: took 1m25.003915701s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 14:17:14.221416  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:14.355624  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:14.395127  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:14.723156  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:14.855377  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:14.894262  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:15.222478  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:15.357985  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:15.395179  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:15.721904  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:15.855147  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:15.895623  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:16.221929  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:16.355303  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:16.395006  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:16.723195  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:16.856025  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:16.955589  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:17.242252  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:17.356896  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:17.399683  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:17.722173  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:17.855684  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:17.894884  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:18.230456  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:18.287811  716202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:17:18.355087  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.395450  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:18.721924  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:18.855378  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.895381  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.234419  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.356526  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.394495  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.679918  716202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392026421s)
	W1026 14:17:19.680008  716202 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:17:19.680133  716202 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
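The 'inspektor-gadget' failure above is kubectl's client-side schema check: every document in an applied YAML stream must carry top-level apiVersion and kind fields, and at least one document in ig-crd.yaml carries neither, so the apply exits with status 1 even though the objects listed in stdout (from ig-deployment.yaml) were accepted. A minimal sketch of the same check over a multi-document stream, assuming gopkg.in/yaml.v3 (an illustration of the rule kubectl enforces, not kubectl's actual implementation):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open(os.Args[1]) // e.g. ig-crd.yaml
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Printf("doc %d: parse error: %v\n", i, err)
				return
			}
			if doc == nil {
				continue // empty document between '---' separators is legal
			}
			// kubectl reports exactly these omissions as
			// "[apiVersion not set, kind not set]".
			if doc["apiVersion"] == nil {
				fmt.Printf("doc %d: apiVersion not set\n", i)
			}
			if doc["kind"] == nil {
				fmt.Printf("doc %d: kind not set\n", i)
			}
		}
	}

Running a check like this against the shipped ig-crd.yaml would identify which document in the stream is missing its type metadata; --validate=false, as the error suggests, would merely skip the check rather than fix the manifest.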
	I1026 14:17:19.722167  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.855689  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.894625  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.222559  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.354973  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.399476  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.722070  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.856525  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.908821  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.228151  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:21.355657  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.394494  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.722167  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:21.855397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.898085  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.232339  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.363740  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.403873  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.722336  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.855724  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.895152  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.221689  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.355492  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.394669  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.725577  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.854859  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.894868  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.230793  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.355417  716202 kapi.go:107] duration metric: took 1m34.503877851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 14:17:24.395930  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.722780  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.895126  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.223566  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.394715  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.722610  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.894699  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.222956  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.395461  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.722011  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.895086  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.226295  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.394876  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.722628  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.894609  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.222223  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.394675  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.723231  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.896045  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.222794  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.395444  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.722033  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.895169  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.222081  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.395489  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.722888  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.895397  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.221733  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.395339  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.722135  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.894292  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.222733  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.395243  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.721781  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.895533  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.222178  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.394323  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.722604  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.895326  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.237847  716202 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:34.397382  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.721902  716202 kapi.go:107] duration metric: took 1m45.5035165s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:17:34.895028  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.394328  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.895322  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.396519  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.899743  716202 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:37.394525  716202 kapi.go:107] duration metric: took 1m44.503223137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:17:37.397340  716202 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-501661 cluster.
	I1026 14:17:37.400174  716202 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:17:37.402977  716202 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 14:17:37.405963  716202 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 14:17:37.408912  716202 addons.go:514] duration metric: took 1m54.654688472s for enable addons: enabled=[amd-gpu-device-plugin registry-creds cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1026 14:17:37.408972  716202 start.go:246] waiting for cluster config update ...
	I1026 14:17:37.408994  716202 start.go:255] writing updated cluster config ...
	I1026 14:17:37.409313  716202 ssh_runner.go:195] Run: rm -f paused
	I1026 14:17:37.412992  716202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:17:37.495636  716202 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5nrx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.503613  716202 pod_ready.go:94] pod "coredns-66bc5c9577-5nrx2" is "Ready"
	I1026 14:17:37.503644  716202 pod_ready.go:86] duration metric: took 7.977333ms for pod "coredns-66bc5c9577-5nrx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.506148  716202 pod_ready.go:83] waiting for pod "etcd-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.511640  716202 pod_ready.go:94] pod "etcd-addons-501661" is "Ready"
	I1026 14:17:37.511715  716202 pod_ready.go:86] duration metric: took 5.537177ms for pod "etcd-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.514407  716202 pod_ready.go:83] waiting for pod "kube-apiserver-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.519914  716202 pod_ready.go:94] pod "kube-apiserver-addons-501661" is "Ready"
	I1026 14:17:37.519942  716202 pod_ready.go:86] duration metric: took 5.510428ms for pod "kube-apiserver-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.523200  716202 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:37.816869  716202 pod_ready.go:94] pod "kube-controller-manager-addons-501661" is "Ready"
	I1026 14:17:37.816901  716202 pod_ready.go:86] duration metric: took 293.672864ms for pod "kube-controller-manager-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.023383  716202 pod_ready.go:83] waiting for pod "kube-proxy-rxl4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.417690  716202 pod_ready.go:94] pod "kube-proxy-rxl4x" is "Ready"
	I1026 14:17:38.417717  716202 pod_ready.go:86] duration metric: took 394.251812ms for pod "kube-proxy-rxl4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:38.617729  716202 pod_ready.go:83] waiting for pod "kube-scheduler-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:39.016800  716202 pod_ready.go:94] pod "kube-scheduler-addons-501661" is "Ready"
	I1026 14:17:39.016830  716202 pod_ready.go:86] duration metric: took 399.073505ms for pod "kube-scheduler-addons-501661" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:17:39.016848  716202 pod_ready.go:40] duration metric: took 1.603819285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:17:39.075659  716202 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 14:17:39.078910  716202 out.go:179] * Done! kubectl is now configured to use "addons-501661" cluster and "default" namespace by default
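The kapi.go and pod_ready.go lines above are minikube polling pods by label selector, roughly every 500ms per the timestamps, until each matching pod reports the Ready condition or a per-addon timeout expires. A minimal client-go sketch of the same pattern, assuming an already-constructed *kubernetes.Clientset (an illustration of the idea, not minikube's actual code):

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodsReady polls until every pod matching selector in ns is Ready,
	// or the timeout expires.
	func waitPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allReady := len(pods.Items) > 0 // a selector matching nothing is still Pending
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
	}

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

Each "duration metric: took ..." line above corresponds to one such wait completing for its selector (registry, csi-hostpath-driver, ingress-nginx, gcp-auth).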
	
	
	==> CRI-O <==
	Oct 26 14:18:07 addons-501661 crio[828]: time="2025-10-26T14:18:07.824297548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:07 addons-501661 crio[828]: time="2025-10-26T14:18:07.824839854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:07 addons-501661 crio[828]: time="2025-10-26T14:18:07.839228825Z" level=info msg="Created container c4b19f3b019532c08408423acd6eb8d062a2e4d9db1182d415daa0538da4b532: default/test-local-path/busybox" id=67ffa7dc-b4a7-4821-855c-3b9d38b97280 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:18:07 addons-501661 crio[828]: time="2025-10-26T14:18:07.840009532Z" level=info msg="Starting container: c4b19f3b019532c08408423acd6eb8d062a2e4d9db1182d415daa0538da4b532" id=f72b8b59-9bd4-4749-be0a-302bcffc9685 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:18:07 addons-501661 crio[828]: time="2025-10-26T14:18:07.842003901Z" level=info msg="Started container" PID=5488 containerID=c4b19f3b019532c08408423acd6eb8d062a2e4d9db1182d415daa0538da4b532 description=default/test-local-path/busybox id=f72b8b59-9bd4-4749-be0a-302bcffc9685 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00393ec961e51390328dff991c3c4c632d55e8d685b18f07a266b4c8b38047f2
	Oct 26 14:18:09 addons-501661 crio[828]: time="2025-10-26T14:18:09.384816839Z" level=info msg="Stopping pod sandbox: 00393ec961e51390328dff991c3c4c632d55e8d685b18f07a266b4c8b38047f2" id=32a4a452-43fb-4f4c-a6f9-e3e3b6b1a317 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 14:18:09 addons-501661 crio[828]: time="2025-10-26T14:18:09.385130785Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:00393ec961e51390328dff991c3c4c632d55e8d685b18f07a266b4c8b38047f2 UID:6a681a8b-d683-4d96-ae8a-1a4558877ac5 NetNS:/var/run/netns/45354513-3fce-41b4-8539-3072f12d4d3b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c8b90}] Aliases:map[]}"
	Oct 26 14:18:09 addons-501661 crio[828]: time="2025-10-26T14:18:09.385275525Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 26 14:18:09 addons-501661 crio[828]: time="2025-10-26T14:18:09.410694266Z" level=info msg="Stopped pod sandbox: 00393ec961e51390328dff991c3c4c632d55e8d685b18f07a266b4c8b38047f2" id=32a4a452-43fb-4f4c-a6f9-e3e3b6b1a317 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.921034907Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf/POD" id=fc14d1b7-0c87-4acd-9568-5d3fb9712314 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.921205435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.931788925Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf Namespace:local-path-storage ID:16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14 UID:d49efec8-55ca-4fb5-850c-af7ca8205d1f NetNS:/var/run/netns/78cc494b-f910-450b-82f2-fa3d10ddb43a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c9480}] Aliases:map[]}"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.931972368Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf to CNI network \"kindnet\" (type=ptp)"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.952623963Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf Namespace:local-path-storage ID:16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14 UID:d49efec8-55ca-4fb5-850c-af7ca8205d1f NetNS:/var/run/netns/78cc494b-f910-450b-82f2-fa3d10ddb43a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c9480}] Aliases:map[]}"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.952945696Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf for CNI network kindnet (type=ptp)"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.962578607Z" level=info msg="Ran pod sandbox 16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14 with infra container: local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf/POD" id=fc14d1b7-0c87-4acd-9568-5d3fb9712314 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.964846766Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=61e80d6f-f329-4728-8e38-d3500fcd9e1e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.9697716Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=da3bcb48-4e7a-4684-bb16-96fd5421b3a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.977994958Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf/helper-pod" id=533ebbfe-2625-4166-86ad-7caa237cffe0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.978133002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.99252291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:10 addons-501661 crio[828]: time="2025-10-26T14:18:10.993085129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:18:11 addons-501661 crio[828]: time="2025-10-26T14:18:11.027125049Z" level=info msg="Created container b5d8e2aa7bbb9a501ddb4e50e0b62b70c0ce45d0a6d5ac80a0d088db966bbfde: local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf/helper-pod" id=533ebbfe-2625-4166-86ad-7caa237cffe0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:18:11 addons-501661 crio[828]: time="2025-10-26T14:18:11.028468508Z" level=info msg="Starting container: b5d8e2aa7bbb9a501ddb4e50e0b62b70c0ce45d0a6d5ac80a0d088db966bbfde" id=6f142d4a-423d-47b4-ad90-7ce7cba9ebd8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:18:11 addons-501661 crio[828]: time="2025-10-26T14:18:11.035141005Z" level=info msg="Started container" PID=5618 containerID=b5d8e2aa7bbb9a501ddb4e50e0b62b70c0ce45d0a6d5ac80a0d088db966bbfde description=local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf/helper-pod id=6f142d4a-423d-47b4-ad90-7ce7cba9ebd8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	b5d8e2aa7bbb9       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   16703338ecdea       helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf   local-path-storage
	c4b19f3b01953       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   00393ec961e51       test-local-path                                              default
	ff2bd01a7a551       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   6bb3ad6ac7456       helper-pod-create-pvc-26a36dca-438b-4339-abca-53d25f00dbaf   local-path-storage
	d196a158da3c8       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   3c524e1943b6e       registry-test                                                default
	1a2edd4bfbc59       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   6f61f35080e24       busybox                                                      default
	9870f74d88fd1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   ad802ef770dce       gcp-auth-78565c9fb4-vzzg7                                    gcp-auth
	0d0f4ac4419c2       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             38 seconds ago       Running             controller                               0                   259bd70bfa23a       ingress-nginx-controller-675c5ddd98-hrnwk                    ingress-nginx
	c4ec9e9442876       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          48 seconds ago       Running             csi-snapshotter                          0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	0c73c42d96770       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          49 seconds ago       Running             csi-provisioner                          0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	c50e91d190b6b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            51 seconds ago       Running             liveness-probe                           0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	a850489f8b2c4       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           52 seconds ago       Running             hostpath                                 0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	6e5248003ed1d       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             53 seconds ago       Exited              patch                                    3                   7aae9e1d9fc4a       gcp-auth-certs-patch-2snq2                                   gcp-auth
	e326676ba82b9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                53 seconds ago       Running             node-driver-registrar                    0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	42eeb20fc4611       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            55 seconds ago       Running             gadget                                   0                   ebfed1fd0c284       gadget-2t2bm                                                 gadget
	9be4b4714f4a0       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             55 seconds ago       Exited              patch                                    3                   8af895ee8a903       ingress-nginx-admission-patch-qmxvf                          ingress-nginx
	e7b0defbfd9a0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              59 seconds ago       Running             registry-proxy                           0                   2d4c1286a2f8c       registry-proxy-26bjw                                         kube-system
	ec7c2286fab64       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   337e1541c58c1       snapshot-controller-7d9fbc56b8-hpxf6                         kube-system
	6b9afdcd645ac       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   41f92b9da6aa9       csi-hostpath-resizer-0                                       kube-system
	eddafdd69a2fd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   0e2ce8d8af579       csi-hostpathplugin-bdsts                                     kube-system
	f11053563b42d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d057682f3b18d       snapshot-controller-7d9fbc56b8-dbl7s                         kube-system
	ccc376e383615       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   2fd242f598f9a       gcp-auth-certs-create-s2bnj                                  gcp-auth
	7164bdbbcc6c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   25c02f0f71529       ingress-nginx-admission-create-pptg4                         ingress-nginx
	82e271218789e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   3d88b63f4857e       kube-ingress-dns-minikube                                    kube-system
	b7521966d45ca       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   fcc067a694402       local-path-provisioner-648f6765c9-4fmxv                      local-path-storage
	613e459325bad       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   198b16435060c       yakd-dashboard-5ff678cb9-bdtjs                               yakd-dashboard
	637c3d5659f24       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   35f6f265994db       csi-hostpath-attacher-0                                      kube-system
	7d68d150ab8c2       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   2fa214e6160e4       nvidia-device-plugin-daemonset-j5x9f                         kube-system
	755c0c31c073f       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   b56d989a649f4       cloud-spanner-emulator-86bd5cbb97-rt9p8                      default
	65de879233549       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   705b0269b42c9       registry-6b586f9694-ndtxx                                    kube-system
	c136798b61600       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   491d5e4e25263       metrics-server-85b7d694d7-ljcz5                              kube-system
	53981aeb4a23e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   82a479e89e2bb       coredns-66bc5c9577-5nrx2                                     kube-system
	ffb41f5a461fd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   06063f345d464       storage-provisioner                                          kube-system
	44bf385182957       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   7f7acc02cd1a2       kindnet-wggwr                                                kube-system
	2b96a203a94a6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   96a06c342c64e       kube-proxy-rxl4x                                             kube-system
	b4c2f12d53270       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   222cb3dfcf334       etcd-addons-501661                                           kube-system
	fb9eabe84a99f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   cb2d5f0fb119b       kube-scheduler-addons-501661                                 kube-system
	ebd8af71508b5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   dc3ab474c62ca       kube-apiserver-addons-501661                                 kube-system
	90535ff6ce64e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   e8d8eea3c7820       kube-controller-manager-addons-501661                        kube-system
	
	
	==> coredns [53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76] <==
	[INFO] 10.244.0.17:47525 - 60777 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001624445s
	[INFO] 10.244.0.17:47525 - 43119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000118171s
	[INFO] 10.244.0.17:47525 - 17933 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000087648s
	[INFO] 10.244.0.17:41900 - 48821 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188629s
	[INFO] 10.244.0.17:41900 - 48567 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000247608s
	[INFO] 10.244.0.17:40949 - 4273 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010419s
	[INFO] 10.244.0.17:40949 - 4076 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179292s
	[INFO] 10.244.0.17:48133 - 37232 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107939s
	[INFO] 10.244.0.17:48133 - 37035 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126696s
	[INFO] 10.244.0.17:53295 - 65408 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326827s
	[INFO] 10.244.0.17:53295 - 77 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326573s
	[INFO] 10.244.0.17:33212 - 30323 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158967s
	[INFO] 10.244.0.17:33212 - 30184 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000216723s
	[INFO] 10.244.0.21:34949 - 17830 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154183s
	[INFO] 10.244.0.21:49462 - 35773 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000075701s
	[INFO] 10.244.0.21:57770 - 55207 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121592s
	[INFO] 10.244.0.21:59139 - 26975 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000072608s
	[INFO] 10.244.0.21:50634 - 40941 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084718s
	[INFO] 10.244.0.21:51166 - 20209 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081321s
	[INFO] 10.244.0.21:38188 - 63625 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002106926s
	[INFO] 10.244.0.21:56940 - 55435 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002170903s
	[INFO] 10.244.0.21:56319 - 15814 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001378208s
	[INFO] 10.244.0.21:45914 - 64890 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002121532s
	[INFO] 10.244.0.23:59919 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000192625s
	[INFO] 10.244.0.23:38208 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000177322s
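The NXDOMAIN/NOERROR pairs above are normal Kubernetes resolver behavior, not lookup failures: with the pod's search domains and the conventional ndots:5 (assumed here; the pod's actual resolv.conf is not shown in the log), a name like registry.kube-system.svc.cluster.local has only four dots, so the stub resolver expands it through each search suffix first, producing the NXDOMAIN chain, before the bare name finally answers NOERROR. A sketch of that expansion order, with the search list read off the query suffixes visible above:

	import "strings"

	// queryOrder returns the order in which a stub resolver tries a name,
	// given the ndots threshold and the search suffix list from resolv.conf.
	func queryOrder(name string, ndots int, search []string) []string {
		bare := strings.TrimSuffix(name, ".")
		var out []string
		if strings.Count(bare, ".") >= ndots {
			out = append(out, bare) // enough dots: tried as-is first
		}
		for _, s := range search {
			out = append(out, bare+"."+s) // these produce the NXDOMAINs above
		}
		if strings.Count(bare, ".") < ndots {
			out = append(out, bare) // bare name tried last, answers NOERROR
		}
		return out
	}

	// For the log above (search list inferred from the query suffixes):
	// queryOrder("registry.kube-system.svc.cluster.local", 5,
	//     []string{"kube-system.svc.cluster.local", "svc.cluster.local",
	//         "cluster.local", "us-east-2.compute.internal"})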
	
	
	==> describe nodes <==
	Name:               addons-501661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-501661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-501661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-501661
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-501661"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:15:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-501661
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:18:10 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:18:10 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:18:10 +0000   Sun, 26 Oct 2025 14:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:18:10 +0000   Sun, 26 Oct 2025 14:16:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-501661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                311e3ffa-44e3-4a34-9a3d-a90448f695e8
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-86bd5cbb97-rt9p8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-2t2bm                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gcp-auth                    gcp-auth-78565c9fb4-vzzg7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hrnwk                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m24s
	  kube-system                 coredns-66bc5c9577-5nrx2                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpathplugin-bdsts                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 etcd-addons-501661                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-wggwr                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-addons-501661                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-addons-501661                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-rxl4x                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-addons-501661                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 metrics-server-85b7d694d7-ljcz5                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m25s
	  kube-system                 nvidia-device-plugin-daemonset-j5x9f                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 registry-6b586f9694-ndtxx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 registry-creds-764b6fb674-2fxp4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 registry-proxy-26bjw                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-7d9fbc56b8-dbl7s                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-hpxf6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  local-path-storage          helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-4fmxv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bdtjs                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 2m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s (x9 over 2m43s)  kubelet          Node addons-501661 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node addons-501661 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node addons-501661 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s                  kubelet          Node addons-501661 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s                  kubelet          Node addons-501661 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s                  kubelet          Node addons-501661 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m31s                  node-controller  Node addons-501661 event: Registered Node addons-501661 in Controller
	  Normal   NodeReady                109s                   kubelet          Node addons-501661 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 13:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:15] overlayfs: idmapped layers are currently not supported
	[  +0.080342] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616] <==
	{"level":"warn","ts":"2025-10-26T14:15:32.849544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.851791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.882448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.908102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.941114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.977074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:32.996027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.029526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.061473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.089907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.141675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.154138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.179630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.207126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.232867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.268917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.294661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.329657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:33.473947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:49.942909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:49.958366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.458313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.472117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.492140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:11.506126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51122","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [9870f74d88fd1169c4c4d0ff6a14410d72f85aa111abcaf0941672d3c4531fdf] <==
	2025/10/26 14:17:36 GCP Auth Webhook started!
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:39 Ready to marshal response ...
	2025/10/26 14:17:39 Ready to write response ...
	2025/10/26 14:17:59 Ready to marshal response ...
	2025/10/26 14:17:59 Ready to write response ...
	2025/10/26 14:18:02 Ready to marshal response ...
	2025/10/26 14:18:02 Ready to write response ...
	2025/10/26 14:18:02 Ready to marshal response ...
	2025/10/26 14:18:02 Ready to write response ...
	2025/10/26 14:18:10 Ready to marshal response ...
	2025/10/26 14:18:10 Ready to write response ...
	
	
	==> kernel <==
	 14:18:12 up  4:00,  0 user,  load average: 1.99, 2.58, 3.06
	Linux addons-501661 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8] <==
	I1026 14:16:14.540482       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:16:14.540515       1 metrics.go:72] Registering metrics
	I1026 14:16:14.540588       1 controller.go:711] "Syncing nftables rules"
	I1026 14:16:23.140890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:23.140930       1 main.go:301] handling current node
	I1026 14:16:33.139831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:33.139866       1 main.go:301] handling current node
	I1026 14:16:43.141866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:43.141893       1 main.go:301] handling current node
	I1026 14:16:53.140781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:53.140815       1 main.go:301] handling current node
	I1026 14:17:03.139065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:03.139103       1 main.go:301] handling current node
	I1026 14:17:13.140815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:13.140858       1 main.go:301] handling current node
	I1026 14:17:23.139842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:23.139871       1 main.go:301] handling current node
	I1026 14:17:33.146797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:33.146834       1 main.go:301] handling current node
	I1026 14:17:43.139976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:43.140077       1 main.go:301] handling current node
	I1026 14:17:53.139996       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:17:53.140031       1 main.go:301] handling current node
	I1026 14:18:03.139058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:18:03.139123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e] <==
	W1026 14:16:28.803631       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:16:28.803707       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:16:28.804017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.806723       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.811658       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.833532       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.874840       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:28.957067       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:29.119017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	E1026 14:16:29.439849       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.117.50:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.117.50:443: connect: connection refused" logger="UnhandledError"
	W1026 14:16:29.804329       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 14:16:29.804411       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:16:29.804503       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 14:16:29.804518       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 14:16:29.804435       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 14:16:29.805691       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 14:16:30.183951       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:17:49.113536       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58868: use of closed network connection
	E1026 14:17:49.473744       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58932: use of closed network connection
	
	
	==> kube-controller-manager [90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5] <==
	I1026 14:15:41.456880       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 14:15:41.456948       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:15:41.456985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:41.464649       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-501661" podCIDRs=["10.244.0.0/24"]
	I1026 14:15:41.496032       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 14:15:41.496127       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 14:15:41.496205       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-501661"
	I1026 14:15:41.496398       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 14:15:41.496865       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:15:41.496247       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 14:15:41.497005       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:15:41.499396       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 14:15:41.500390       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:15:41.500533       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:15:41.500656       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:15:41.511535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1026 14:15:47.897092       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1026 14:16:11.446855       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 14:16:11.450978       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1026 14:16:11.528758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 14:16:11.528891       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 14:16:11.528946       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:16:11.629753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:16:11.651397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:16:26.541790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11] <==
	I1026 14:15:43.063411       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:15:43.314321       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:15:43.422209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:15:43.422255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:15:43.422329       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:15:43.517632       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:15:43.517696       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:15:43.549215       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:15:43.549548       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:15:43.549563       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:15:43.553350       1 config.go:200] "Starting service config controller"
	I1026 14:15:43.553377       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:15:43.553395       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:15:43.553399       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:15:43.553411       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:15:43.553415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:15:43.554114       1 config.go:309] "Starting node config controller"
	I1026 14:15:43.554133       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:15:43.554140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:15:43.653685       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:15:43.653725       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:15:43.653758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1] <==
	E1026 14:15:34.876061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:15:34.876131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:15:34.876188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:15:34.876241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:15:34.876291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:15:34.876465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:15:34.876510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:15:34.876813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:15:34.876905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:15:34.878537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:15:34.878627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:15:34.878677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:15:34.878725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:15:34.878775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:15:34.878907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:15:34.878959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:15:34.879075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:15:35.795133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:15:35.795291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:15:35.814527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:15:35.830775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:15:35.888414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:15:35.900737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:15:35.900745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1026 14:15:36.462893       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:18:09 addons-501661 kubelet[1300]: I1026 14:18:09.605009    1300 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a681a8b-d683-4d96-ae8a-1a4558877ac5-gcp-creds\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:09 addons-501661 kubelet[1300]: I1026 14:18:09.605049    1300 reconciler_common.go:299] "Volume detached for volume \"pvc-26a36dca-438b-4339-abca-53d25f00dbaf\" (UniqueName: \"kubernetes.io/host-path/6a681a8b-d683-4d96-ae8a-1a4558877ac5-pvc-26a36dca-438b-4339-abca-53d25f00dbaf\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:09 addons-501661 kubelet[1300]: I1026 14:18:09.605063    1300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lf2kj\" (UniqueName: \"kubernetes.io/projected/6a681a8b-d683-4d96-ae8a-1a4558877ac5-kube-api-access-lf2kj\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:10 addons-501661 kubelet[1300]: I1026 14:18:10.411777    1300 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00393ec961e51390328dff991c3c4c632d55e8d685b18f07a266b4c8b38047f2"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: E1026 14:18:10.444249    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-501661\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-501661' and this object" podUID="6a681a8b-d683-4d96-ae8a-1a4558877ac5" pod="default/test-local-path"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: E1026 14:18:10.651471    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-501661\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-501661' and this object" podUID="6a681a8b-d683-4d96-ae8a-1a4558877ac5" pod="default/test-local-path"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: I1026 14:18:10.752025    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-gcp-creds\") pod \"helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") " pod="local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: I1026 14:18:10.752253    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-data\") pod \"helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") " pod="local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: I1026 14:18:10.752449    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/d49efec8-55ca-4fb5-850c-af7ca8205d1f-script\") pod \"helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") " pod="local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: I1026 14:18:10.752685    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl25f\" (UniqueName: \"kubernetes.io/projected/d49efec8-55ca-4fb5-850c-af7ca8205d1f-kube-api-access-xl25f\") pod \"helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") " pod="local-path-storage/helper-pod-delete-pvc-26a36dca-438b-4339-abca-53d25f00dbaf"
	Oct 26 14:18:10 addons-501661 kubelet[1300]: W1026 14:18:10.961080    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/33a58f25144bc0f5d18e144dfb9571be94789fcb878ef949d5bd924caeccf4f0/crio-16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14 WatchSource:0}: Error finding container 16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14: Status 404 returned error can't find the container with id 16703338ecdea0fdd5c5bdc4e7f3a5a7fbddcc70f1020fed0e0fa1d385833d14
	Oct 26 14:18:11 addons-501661 kubelet[1300]: E1026 14:18:11.453733    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-501661\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-501661' and this object" podUID="6a681a8b-d683-4d96-ae8a-1a4558877ac5" pod="default/test-local-path"
	Oct 26 14:18:11 addons-501661 kubelet[1300]: I1026 14:18:11.501256    1300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a681a8b-d683-4d96-ae8a-1a4558877ac5" path="/var/lib/kubelet/pods/6a681a8b-d683-4d96-ae8a-1a4558877ac5/volumes"
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.573803    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/d49efec8-55ca-4fb5-850c-af7ca8205d1f-script\") pod \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") "
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.573882    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl25f\" (UniqueName: \"kubernetes.io/projected/d49efec8-55ca-4fb5-850c-af7ca8205d1f-kube-api-access-xl25f\") pod \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") "
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.573907    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-data\") pod \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") "
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.573951    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-gcp-creds\") pod \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\" (UID: \"d49efec8-55ca-4fb5-850c-af7ca8205d1f\") "
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.574143    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d49efec8-55ca-4fb5-850c-af7ca8205d1f" (UID: "d49efec8-55ca-4fb5-850c-af7ca8205d1f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.574680    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49efec8-55ca-4fb5-850c-af7ca8205d1f-script" (OuterVolumeSpecName: "script") pod "d49efec8-55ca-4fb5-850c-af7ca8205d1f" (UID: "d49efec8-55ca-4fb5-850c-af7ca8205d1f"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.574722    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-data" (OuterVolumeSpecName: "data") pod "d49efec8-55ca-4fb5-850c-af7ca8205d1f" (UID: "d49efec8-55ca-4fb5-850c-af7ca8205d1f"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.590657    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49efec8-55ca-4fb5-850c-af7ca8205d1f-kube-api-access-xl25f" (OuterVolumeSpecName: "kube-api-access-xl25f") pod "d49efec8-55ca-4fb5-850c-af7ca8205d1f" (UID: "d49efec8-55ca-4fb5-850c-af7ca8205d1f"). InnerVolumeSpecName "kube-api-access-xl25f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.674624    1300 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/d49efec8-55ca-4fb5-850c-af7ca8205d1f-script\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.674663    1300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xl25f\" (UniqueName: \"kubernetes.io/projected/d49efec8-55ca-4fb5-850c-af7ca8205d1f-kube-api-access-xl25f\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.674674    1300 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-data\") on node \"addons-501661\" DevicePath \"\""
	Oct 26 14:18:12 addons-501661 kubelet[1300]: I1026 14:18:12.674683    1300 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d49efec8-55ca-4fb5-850c-af7ca8205d1f-gcp-creds\") on node \"addons-501661\" DevicePath \"\""
	
	
	==> storage-provisioner [ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df] <==
	W1026 14:17:46.850371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:48.853373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:48.857774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:50.860576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:50.865340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:52.867810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:52.872404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:54.876100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:54.882958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:56.886447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:56.894719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:58.897543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:17:58.904274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:00.907497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:00.912030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:02.915811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:02.920578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:04.924551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:04.929533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:06.933253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:06.938765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:08.941665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:08.946032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:10.949840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:18:10.954368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
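The etcd warnings in the capture above ("rejected connection on client endpoint ... error: EOF") are worth separating from the actual failure: they are the usual signature of TCP-level health checks that open etcd's client port and close it before completing a TLS handshake, so etcd logs each probe as a rejected connection. A minimal Go sketch of a probe that produces exactly this pattern (the address is an assumption; 2379 is etcd's default client port):

	// tcp_probe.go: dial etcd's client port and close immediately.
	// Closing before the TLS handshake is what etcd reports as
	// "rejected connection on client endpoint ... error: EOF".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second) // assumed address
		if err != nil {
			fmt.Println("etcd not reachable:", err)
			return
		}
		conn.Close() // no TLS handshake, so etcd sees EOF
		fmt.Println("etcd TCP port is open")
	}

The storage-provisioner warnings are similarly benign here: they only note that its leader election still reads the deprecated v1 Endpoints API rather than discovery.k8s.io/v1 EndpointSlices.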
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-501661 -n addons-501661
helpers_test.go:269: (dbg) Run:  kubectl --context addons-501661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf registry-creds-764b6fb674-2fxp4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf registry-creds-764b6fb674-2fxp4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf registry-creds-764b6fb674-2fxp4: exit status 1 (83.500771ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pptg4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qmxvf" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2fxp4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-501661 describe pod ingress-nginx-admission-create-pptg4 ingress-nginx-admission-patch-qmxvf registry-creds-764b6fb674-2fxp4: exit status 1
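The NotFound errors are a timing artifact rather than a separate failure: the post-mortem helper first lists pods with the field selector status.phase!=Running, and by the time it runs kubectl describe, the ingress-nginx admission pods and the registry-creds pod have already been deleted. A hedged client-go sketch of the same non-running-pods query (the kubeconfig handling is an assumption; the harness uses --context addons-501661 instead):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query the post-mortem helper issues via kubectl.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}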
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable headlamp --alsologtostderr -v=1: exit status 11 (280.35134ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:18:13.910795  723634 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:13.911485  723634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:13.911500  723634 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:13.911505  723634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:13.911772  723634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:13.912129  723634 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:13.912511  723634 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:13.912530  723634 addons.go:606] checking whether the cluster is paused
	I1026 14:18:13.912637  723634 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:13.912653  723634 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:13.913192  723634 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:13.930809  723634 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:13.930879  723634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:13.950281  723634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:14.060091  723634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:14.060181  723634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:14.101995  723634 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:14.102018  723634 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:14.102024  723634 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:14.102028  723634 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:14.102036  723634 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:14.102074  723634 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:14.102085  723634 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:14.102089  723634 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:14.102092  723634 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:14.102098  723634 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:14.102109  723634 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:14.102113  723634 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:14.102116  723634 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:14.102146  723634 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:14.102150  723634 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:14.102166  723634 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:14.102177  723634 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:14.102181  723634 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:14.102185  723634 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:14.102188  723634 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:14.102194  723634 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:14.102197  723634 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:14.102201  723634 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:14.102204  723634 cri.go:89] found id: ""
	I1026 14:18:14.102274  723634 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:14.119447  723634 out.go:203] 
	W1026 14:18:14.122358  723634 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:14.122388  723634 out.go:285] * 
	* 
	W1026 14:18:14.128800  723634 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:14.132023  723634 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.89s)
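The addons-disable failures in this section share the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by running sudo runc list -f json, and on this crio node the /run/runc state directory does not exist, so the check itself exits non-zero and minikube aborts with MK_ADDON_DISABLE_PAUSED. A minimal Go sketch of the guard one might apply; treating a missing runtime state directory as "nothing is paused" is an assumption for illustration, not minikube's actual fix:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
		"os/exec"
	)

	// listPaused shells out the same way the failing check does, but
	// short-circuits when the runc state dir is absent (as it is here,
	// where cri-o keeps container state elsewhere).
	func listPaused() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); errors.Is(err, fs.ErrNotExist) {
			return nil, nil // no state dir => no containers to be paused
		}
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		out, err := listPaused()
		if err != nil {
			fmt.Fprintln(os.Stderr, "list paused:", err)
			os.Exit(1)
		}
		fmt.Printf("runc state: %s\n", out)
	}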

TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-rt9p8" [5ad94aa8-7e03-4ea6-b120-accab089b168] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003913231s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (325.739515ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:18:09.986089  722936 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:09.986920  722936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:09.986939  722936 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:09.986945  722936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:09.987335  722936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:09.987741  722936 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:09.988209  722936 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:09.988225  722936 addons.go:606] checking whether the cluster is paused
	I1026 14:18:09.988360  722936 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:09.988380  722936 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:09.989018  722936 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:10.028137  722936 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:10.028202  722936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:10.050679  722936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:10.172276  722936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:10.172377  722936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:10.211740  722936 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:10.211760  722936 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:10.211765  722936 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:10.211769  722936 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:10.211772  722936 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:10.211776  722936 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:10.211779  722936 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:10.211782  722936 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:10.211786  722936 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:10.211793  722936 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:10.211796  722936 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:10.211803  722936 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:10.211806  722936 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:10.211809  722936 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:10.211813  722936 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:10.211817  722936 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:10.211821  722936 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:10.211824  722936 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:10.211828  722936 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:10.211831  722936 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:10.211835  722936 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:10.211838  722936 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:10.211841  722936 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:10.211844  722936 cri.go:89] found id: ""
	I1026 14:18:10.211910  722936 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:10.231977  722936 out.go:203] 
	W1026 14:18:10.233842  722936 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:10.233868  722936 out.go:285] * 
	* 
	W1026 14:18:10.240842  722936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:10.242888  722936 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                    
TestAddons/parallel/LocalPath (8.75s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-501661 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-501661 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6a681a8b-d683-4d96-ae8a-1a4558877ac5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6a681a8b-d683-4d96-ae8a-1a4558877ac5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6a681a8b-d683-4d96-ae8a-1a4558877ac5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003059828s
addons_test.go:967: (dbg) Run:  kubectl --context addons-501661 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 ssh "cat /opt/local-path-provisioner/pvc-26a36dca-438b-4339-abca-53d25f00dbaf_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-501661 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-501661 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (428.285396ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:18:10.727846  723109 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:10.729916  723109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:10.729936  723109 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:10.729943  723109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:10.730225  723109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:10.730515  723109 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:10.730919  723109 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:10.730939  723109 addons.go:606] checking whether the cluster is paused
	I1026 14:18:10.731042  723109 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:10.731056  723109 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:10.731500  723109 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:10.771531  723109 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:10.771597  723109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:10.800474  723109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:10.919450  723109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:10.919531  723109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:10.985369  723109 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:10.985392  723109 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:10.985398  723109 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:10.985402  723109 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:10.985407  723109 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:10.985410  723109 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:10.985420  723109 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:10.985424  723109 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:10.985427  723109 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:10.985434  723109 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:10.985438  723109 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:10.985441  723109 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:10.985444  723109 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:10.985447  723109 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:10.985450  723109 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:10.985455  723109 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:10.985460  723109 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:10.985464  723109 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:10.985467  723109 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:10.985470  723109 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:10.985475  723109 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:10.985478  723109 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:10.985481  723109 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:10.985484  723109 cri.go:89] found id: ""
	I1026 14:18:10.985537  723109 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:11.024200  723109 out.go:203] 
	W1026 14:18:11.029853  723109 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:11.029886  723109 out.go:285] * 
	* 
	W1026 14:18:11.054729  723109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:11.059873  723109 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.75s)
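Note that the LocalPath storage flow itself passed: the PVC bound, the test-local-path pod completed, and the written file was read back over SSH; only the final addon-disable step hit the runc check described above. The testdata/storage-provisioner-rancher manifests are not reproduced in this report, but a minimal PVC/Pod pair of the kind the test applies might look like the sketch below (the local-path storage class name and the busybox write command are assumptions):

	kubectl --context addons-501661 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path   # assumed class installed by the addon
	  resources:
	    requests:
	      storage: 64Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	  labels:
	    run: test-local-path
	spec:
	  restartPolicy: Never
	  containers:
	  - name: busybox
	    image: docker.io/library/busybox
	    command: ["sh", "-c", "echo local-path-test > /data/file1"]
	    volumeMounts:
	    - name: data
	      mountPath: /data
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: test-pvc
	EOF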

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j5x9f" [1901d15f-6cf6-4f1b-9fe4-ed4308c25f90] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003758519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (282.828866ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:18:02.092984  722593 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:18:02.093862  722593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:02.093905  722593 out.go:374] Setting ErrFile to fd 2...
	I1026 14:18:02.093928  722593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:18:02.094234  722593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:18:02.094584  722593 mustload.go:65] Loading cluster: addons-501661
	I1026 14:18:02.095037  722593 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:02.095078  722593 addons.go:606] checking whether the cluster is paused
	I1026 14:18:02.095217  722593 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:18:02.095249  722593 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:18:02.095758  722593 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:18:02.113856  722593 ssh_runner.go:195] Run: systemctl --version
	I1026 14:18:02.113924  722593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:18:02.132829  722593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:18:02.248234  722593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:18:02.248326  722593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:18:02.280250  722593 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:18:02.280269  722593 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:18:02.280275  722593 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:18:02.280279  722593 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:18:02.280283  722593 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:18:02.280287  722593 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:18:02.280290  722593 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:18:02.280294  722593 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:18:02.280297  722593 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:18:02.280305  722593 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:18:02.280308  722593 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:18:02.280311  722593 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:18:02.280315  722593 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:18:02.280318  722593 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:18:02.280321  722593 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:18:02.280330  722593 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:18:02.280334  722593 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:18:02.280338  722593 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:18:02.280342  722593 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:18:02.280345  722593 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:18:02.280350  722593 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:18:02.280353  722593 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:18:02.280356  722593 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:18:02.280359  722593 cri.go:89] found id: ""
	I1026 14:18:02.280409  722593 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:18:02.297836  722593 out.go:203] 
	W1026 14:18:02.302434  722593 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:18:02.302468  722593 out.go:285] * 
	* 
	W1026 14:18:02.310443  722593 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:18:02.314129  722593 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)

                                                
                                    
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bdtjs" [62de20f9-1642-48ac-8427-86729d8a28c9] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00352005s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-501661 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-501661 addons disable yakd --alsologtostderr -v=1: exit status 11 (271.587769ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:17:55.814884  722498 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:17:55.816015  722498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:55.816040  722498 out.go:374] Setting ErrFile to fd 2...
	I1026 14:17:55.816046  722498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:55.816313  722498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:17:55.816640  722498 mustload.go:65] Loading cluster: addons-501661
	I1026 14:17:55.817123  722498 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:55.817145  722498 addons.go:606] checking whether the cluster is paused
	I1026 14:17:55.817253  722498 config.go:182] Loaded profile config "addons-501661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:55.817269  722498 host.go:66] Checking if "addons-501661" exists ...
	I1026 14:17:55.817730  722498 cli_runner.go:164] Run: docker container inspect addons-501661 --format={{.State.Status}}
	I1026 14:17:55.835694  722498 ssh_runner.go:195] Run: systemctl --version
	I1026 14:17:55.835766  722498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-501661
	I1026 14:17:55.855111  722498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/addons-501661/id_rsa Username:docker}
	I1026 14:17:55.959371  722498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:17:55.959462  722498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:17:55.988856  722498 cri.go:89] found id: "c4ec9e9442876868d6f3ccd753e8e2504825be9e25716a9362fc1bda132040f1"
	I1026 14:17:55.988881  722498 cri.go:89] found id: "0c73c42d9677058df1e87c6d104d039511c854bc396839bf6b210ebb11f20807"
	I1026 14:17:55.988886  722498 cri.go:89] found id: "c50e91d190b6b8aba92f0048049d7b5a6c7e4a6ee7909396c49707c059b1758e"
	I1026 14:17:55.988890  722498 cri.go:89] found id: "a850489f8b2c4101d9dd30788611d5487771ff0e49d5b13e7ba88b663394ce6b"
	I1026 14:17:55.988894  722498 cri.go:89] found id: "e326676ba82b967034ff22a3f58121d718f63248e8cd03c2711942c5ab58f110"
	I1026 14:17:55.988898  722498 cri.go:89] found id: "e7b0defbfd9a0fbc34b3847006afd1e34f175960e32dc9f93a19ee3872b2334a"
	I1026 14:17:55.988901  722498 cri.go:89] found id: "ec7c2286fab64d68869082e91ae05ae52e747621a1ed9ec0a6b0a4846cb10d29"
	I1026 14:17:55.988904  722498 cri.go:89] found id: "6b9afdcd645ace6e53d398cfb18b908e4e3f8d759533054033d53c88c3991bcb"
	I1026 14:17:55.988907  722498 cri.go:89] found id: "eddafdd69a2fd73dc14f14b9ae33cc5f2f2771b532cd4f871cc87b7d35ba59b0"
	I1026 14:17:55.988913  722498 cri.go:89] found id: "f11053563b42d2b88de4114903a45308e18ec8d69977139bb596d20ec57de700"
	I1026 14:17:55.988917  722498 cri.go:89] found id: "82e271218789e40dcc6df229c408e53f63917ccfab45bfc50204ffc09ad42062"
	I1026 14:17:55.988920  722498 cri.go:89] found id: "637c3d5659f24349e28fa6ad3a8564a13faa6ecdf7b11bc53b11f18842adc2cd"
	I1026 14:17:55.988924  722498 cri.go:89] found id: "7d68d150ab8c2563d15cc0e73d46228ee7fa079ef8777cba7f6f3520a4612110"
	I1026 14:17:55.988927  722498 cri.go:89] found id: "65de879233549adf2e97085418294654f941586dc41d8979bd625c6ac63d9078"
	I1026 14:17:55.988931  722498 cri.go:89] found id: "c136798b616003b15c2fe6381c1384b0db195fa1b56b2cf8b0fa232fed5c3775"
	I1026 14:17:55.988936  722498 cri.go:89] found id: "53981aeb4a23e1afc338599d3d4d9c00d9c612bf7f41b5520f8df49437116d76"
	I1026 14:17:55.988939  722498 cri.go:89] found id: "ffb41f5a461fd4bea49f2f0b470a41f63eb9a79c18057a008bca507bf8f369df"
	I1026 14:17:55.988945  722498 cri.go:89] found id: "44bf38518295794a5bda48e0b0b0cd9fbe4b9d21283c3913eeb493d42d8831f8"
	I1026 14:17:55.988948  722498 cri.go:89] found id: "2b96a203a94a6a1ffbf956f7989e49a515512d93b16fb6662b90a4acf1d01e11"
	I1026 14:17:55.988951  722498 cri.go:89] found id: "b4c2f12d53270dadeba34bdb2b40bc918a201d5b0260aff9240a30cf3c178616"
	I1026 14:17:55.988956  722498 cri.go:89] found id: "fb9eabe84a99f514b36f0d2d6aef958614aa6e1b8fce581ee2406a18d582b2c1"
	I1026 14:17:55.988962  722498 cri.go:89] found id: "ebd8af71508b5aa19b7a3f1885aa0cf27a6f8b8057599b98c21e69cc7bcf693e"
	I1026 14:17:55.988965  722498 cri.go:89] found id: "90535ff6ce64e543229cbe45a34b8202994d3a4fc590a8538ef2e9a459ddd5a5"
	I1026 14:17:55.988968  722498 cri.go:89] found id: ""
	I1026 14:17:55.989021  722498 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:17:56.006997  722498 out.go:203] 
	W1026 14:17:56.012527  722498 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:17:56.012565  722498 out.go:285] * 
	* 
	W1026 14:17:56.019341  722498 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:17:56.023742  722498 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-501661 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-707472 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-707472 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-49q5b" [067f19bc-67f4-4787-9531-7dc6388a40d2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1026 14:27:39.980415  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:28:07.690356  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:32:39.980443  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-707472 -n functional-707472
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-26 14:35:39.20403219 +0000 UTC m=+1270.450756909
functional_test.go:1645: (dbg) Run:  kubectl --context functional-707472 describe po hello-node-connect-7d85dfc575-49q5b -n default
functional_test.go:1645: (dbg) kubectl --context functional-707472 describe po hello-node-connect-7d85dfc575-49q5b -n default:
Name:             hello-node-connect-7d85dfc575-49q5b
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-707472/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:25:38 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgg76 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-bgg76:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-49q5b to functional-707472
  Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-707472 logs hello-node-connect-7d85dfc575-49q5b -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-707472 logs hello-node-connect-7d85dfc575-49q5b -n default: exit status 1 (91.503308ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-49q5b" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-707472 logs hello-node-connect-7d85dfc575-49q5b -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-707472 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-49q5b
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-707472/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:25:38 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgg76 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-bgg76:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-49q5b to functional-707472
  Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-707472 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-707472 logs -l app=hello-node-connect: exit status 1 (83.717773ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-49q5b" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-707472 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-707472 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.219.156
IPs:                      10.103.219.156
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31199/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
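The service has no Endpoints because the pod never became Ready: the kubelet events above show the pull being rejected with "short name mode is enforcing", meaning the crio node resolves image references through /etc/containers/registries.conf and refuses an unqualified name like kicbase/echo-server:latest as ambiguous rather than resolving it against a search list. A sketch of repointing the deployment at a fully qualified reference (that the image lives under docker.io is an assumption):

	# Fully qualify the image so no short-name resolution is needed:
	kubectl --context functional-707472 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# Then watch the rollout and confirm the service gains endpoints:
	kubectl --context functional-707472 rollout status deployment/hello-node-connect
	kubectl --context functional-707472 get endpoints hello-node-connect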
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-707472
helpers_test.go:243: (dbg) docker inspect functional-707472:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d",
	        "Created": "2025-10-26T14:22:19.942612481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:22:20.00780219Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/hostname",
	        "HostsPath": "/var/lib/docker/containers/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/hosts",
	        "LogPath": "/var/lib/docker/containers/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d-json.log",
	        "Name": "/functional-707472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-707472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-707472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d",
	                "LowerDir": "/var/lib/docker/overlay2/8d5d367a123e324bd7036d3760c0e968ce2661d11031901096a82a0d04a72e8d-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d5d367a123e324bd7036d3760c0e968ce2661d11031901096a82a0d04a72e8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d5d367a123e324bd7036d3760c0e968ce2661d11031901096a82a0d04a72e8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d5d367a123e324bd7036d3760c0e968ce2661d11031901096a82a0d04a72e8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-707472",
	                "Source": "/var/lib/docker/volumes/functional-707472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-707472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-707472",
	                "name.minikube.sigs.k8s.io": "functional-707472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afefd68fcad88b89f8959086c56079939d05bf6547871f32ac9434095f51d4e2",
	            "SandboxKey": "/var/run/docker/netns/afefd68fcad8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-707472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d6:87:6b:a6:8f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95fc817f78c4e0451863f231a757e5cbc4e23e5ce86c5b9cfb318ad31feb7188",
	                    "EndpointID": "61ea5456d204b23515614a7a071c73596b4e923c15356166cb25d04cf570fb2f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-707472",
	                        "281808eb03cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
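The inspect dump above records the host-port bindings minikube published for the functional-707472 container, including 8441/tcp for the apiserver. As a minimal sketch (assuming the JSON array above was saved to inspect.json; the file name is illustrative), the binding can be read back with only Go's standard library:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// binding matches the HostIp/HostPort objects under NetworkSettings.Ports above.
type binding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]binding
	}
}

func main() {
	data, err := os.ReadFile("inspect.json") // the array captured above
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect emits a JSON array
	if err := json.Unmarshal(data, &cs); err != nil {
		panic(err)
	}
	for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33550
	}
}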
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-707472 -n functional-707472
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 logs -n 25: (1.546415965s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-707472 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ ssh            │ functional-707472 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ ssh            │ functional-707472 ssh -- ls -la /mount-9p                                                                          │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ ssh            │ functional-707472 ssh sudo umount -f /mount-9p                                                                     │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ mount          │ -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount1 --alsologtostderr -v=1 │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ mount          │ -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount2 --alsologtostderr -v=1 │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ mount          │ -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount3 --alsologtostderr -v=1 │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ ssh            │ functional-707472 ssh findmnt -T /mount1                                                                           │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ ssh            │ functional-707472 ssh findmnt -T /mount2                                                                           │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ ssh            │ functional-707472 ssh findmnt -T /mount3                                                                           │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ mount          │ -p functional-707472 --kill=true                                                                                   │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ start          │ -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ start          │ -p functional-707472 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ start          │ -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-707472 --alsologtostderr -v=1                                                     │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ update-context │ functional-707472 update-context --alsologtostderr -v=2                                                            │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ update-context │ functional-707472 update-context --alsologtostderr -v=2                                                            │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ update-context │ functional-707472 update-context --alsologtostderr -v=2                                                            │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ image          │ functional-707472 image ls --format short --alsologtostderr                                                        │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ image          │ functional-707472 image ls --format yaml --alsologtostderr                                                         │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ ssh            │ functional-707472 ssh pgrep buildkitd                                                                              │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │                     │
	│ image          │ functional-707472 image build -t localhost/my-image:functional-707472 testdata/build --alsologtostderr             │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ image          │ functional-707472 image ls                                                                                         │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ image          │ functional-707472 image ls --format json --alsologtostderr                                                         │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	│ image          │ functional-707472 image ls --format table --alsologtostderr                                                        │ functional-707472 │ jenkins │ v1.37.0 │ 26 Oct 25 14:35 UTC │ 26 Oct 25 14:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:35:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:35:20.677663  742965 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:35:20.677779  742965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.677790  742965 out.go:374] Setting ErrFile to fd 2...
	I1026 14:35:20.677795  742965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.678155  742965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:35:20.678527  742965 out.go:368] Setting JSON to false
	I1026 14:35:20.679402  742965 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15473,"bootTime":1761473848,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:35:20.679505  742965 start.go:141] virtualization:  
	I1026 14:35:20.682703  742965 out.go:179] * [functional-707472] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 14:35:20.686507  742965 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:35:20.686557  742965 notify.go:220] Checking for updates...
	I1026 14:35:20.690258  742965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:35:20.693096  742965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:35:20.695898  742965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:35:20.698836  742965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 14:35:20.701705  742965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:35:20.705097  742965 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:35:20.705673  742965 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:35:20.740921  742965 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:35:20.741126  742965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:35:20.808633  742965 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 14:35:20.798565361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:35:20.808908  742965 docker.go:318] overlay module found
	I1026 14:35:20.812091  742965 out.go:179] * Using the docker driver based on existing profile
	I1026 14:35:20.815011  742965 start.go:305] selected driver: docker
	I1026 14:35:20.815034  742965 start.go:925] validating driver "docker" against &{Name:functional-707472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-707472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:35:20.815139  742965 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:35:20.818709  742965 out.go:203] 
	W1026 14:35:20.821725  742965 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 14:35:20.824626  742965 out.go:203] 
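The dry-run above exits at preflight because the requested 250MiB sits below minikube's usable memory floor of 1800MB. A hypothetical sketch of that comparison (the constant and function names here are illustrative, not minikube's actual code):

package main

import "fmt"

// minUsableMB mirrors the 1800MB floor reported in the log line above.
const minUsableMB = 1800

// validateMemory is a hypothetical stand-in for minikube's preflight check.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // mirrors the --memory 250MB dry-run above
}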
	
	
	==> CRI-O <==
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.119401719Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f738106d-8a15-42ee-a4ad-17bc73656522 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.121403838Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e24f362c-5262-4a50-b495-2e3ffd0283c0 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.123146116Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.123774232Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9026312b-6abe-4dff-92d0-dcd2cc700cdd name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.130604763Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6/kubernetes-dashboard" id=e43c9a33-38ab-4c51-9a4e-20d386f7932a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.13073987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.135986575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.136194815Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd7ed4ef235f4795d576656d44a129b2873c6eec08d4e5d1296876d85c751cec/merged/etc/group: no such file or directory"
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.136543487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.154902243Z" level=info msg="Created container 426ceeb1bb77a47ec799290a9b74201b5a740da3a55bf02efc4ecb4f664e158e: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6/kubernetes-dashboard" id=e43c9a33-38ab-4c51-9a4e-20d386f7932a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.157015691Z" level=info msg="Starting container: 426ceeb1bb77a47ec799290a9b74201b5a740da3a55bf02efc4ecb4f664e158e" id=1b0dbd25-8c50-4b68-aaa2-6536704dd53e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.158852106Z" level=info msg="Started container" PID=6729 containerID=426ceeb1bb77a47ec799290a9b74201b5a740da3a55bf02efc4ecb4f664e158e description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6/kubernetes-dashboard id=1b0dbd25-8c50-4b68-aaa2-6536704dd53e name=/runtime.v1.RuntimeService/StartContainer sandboxID=63d2d2ad9ddae83b64ad1e9bbc7820ac0d1fc95e3e18ac44f86b22d8706d1e03
	Oct 26 14:35:27 functional-707472 crio[3496]: time="2025-10-26T14:35:27.397863341Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.429522758Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=e24f362c-5262-4a50-b495-2e3ffd0283c0 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.430113951Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d7ec6b14-dea8-4b00-b751-3135a3072f4f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.434000364Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=809dda3d-6093-4173-965f-2f2c144f4045 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.441506519Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-f9b46/dashboard-metrics-scraper" id=fa2353bb-4bbe-4ea4-a53a-9cf7d38cf7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.441642332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.45143349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.451827283Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3a4ae59fdd17a78584099bbd5c44625a6bf44735234e7ab85a5f383ebe72c86e/merged/etc/group: no such file or directory"
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.452292337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.467863563Z" level=info msg="Created container ee64f81e2a098f6d7393277c098daf80ff53e8ac6233b304991797e0714b9daa: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-f9b46/dashboard-metrics-scraper" id=fa2353bb-4bbe-4ea4-a53a-9cf7d38cf7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.468789849Z" level=info msg="Starting container: ee64f81e2a098f6d7393277c098daf80ff53e8ac6233b304991797e0714b9daa" id=65414b2b-fa5d-4042-a3a4-dd25a739fda8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:35:28 functional-707472 crio[3496]: time="2025-10-26T14:35:28.470870033Z" level=info msg="Started container" PID=6771 containerID=ee64f81e2a098f6d7393277c098daf80ff53e8ac6233b304991797e0714b9daa description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-f9b46/dashboard-metrics-scraper id=65414b2b-fa5d-4042-a3a4-dd25a739fda8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=23df72cd5c0195f3e7e4cb76312248c6c75f08f7a862509c5d93854ef5c6051f
	Oct 26 14:35:36 functional-707472 crio[3496]: time="2025-10-26T14:35:36.831474904Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0e1dd481-8182-4709-b3b6-85a095fb178f name=/runtime.v1.ImageService/PullImage
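Two digests appear for metrics-scraper above: the requested @sha256:760498... and the pulled @sha256:853c43.... The requested digest names a multi-arch manifest list; the runtime logs the amd64 mismatch while walking it and settles on the linux/arm64 child manifest, whose own digest is the one reported as pulled. A sketch of that resolution using the third-party go-containerregistry module (an assumption for illustration; nothing in this test suite uses it):

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// The digest CRI-O was asked for (a multi-arch manifest list).
	ref, err := name.ParseReference("docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c")
	if err != nil {
		log.Fatal(err)
	}
	idx, err := remote.Index(ref) // fetch the index; requires network access
	if err != nil {
		log.Fatal(err)
	}
	man, err := idx.IndexManifest()
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range man.Manifests {
		// Pick the child manifest for this host's platform, as the runtime does.
		if d.Platform != nil && d.Platform.OS == "linux" && d.Platform.Architecture == "arm64" {
			fmt.Println("resolved linux/arm64 manifest:", d.Digest)
		}
	}
}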
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ee64f81e2a098       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   12 seconds ago      Running             dashboard-metrics-scraper   0                   23df72cd5c019       dashboard-metrics-scraper-77bf4d6c4c-f9b46   kubernetes-dashboard
	426ceeb1bb77a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         13 seconds ago      Running             kubernetes-dashboard        0                   63d2d2ad9ddae       kubernetes-dashboard-855c9754f9-zbpt6        kubernetes-dashboard
	d27028f311430       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              27 seconds ago      Exited              mount-munger                0                   8ef7dbbdff41c       busybox-mount                                default
	17a97dd09bd61       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f                  10 minutes ago      Running             myfrontend                  0                   4f00738fe52af       sp-pod                                       default
	1ba71a2b82b72       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                  10 minutes ago      Running             nginx                       0                   89544807a76ee       nginx-svc                                    default
	fadc9368bcca0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   beadbf0100952       coredns-66bc5c9577-b9xl8                     kube-system
	a2ed11c06a2d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   4a19f023ef87c       kindnet-psh5n                                kube-system
	890988ab0c433       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         2                   9ea628cc29b23       storage-provisioner                          kube-system
	c5d95d226424b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  2                   c8d01d90d8337       kube-proxy-kjbcx                             kube-system
	e144436ce571c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   2ba445105c3da       kube-apiserver-functional-707472             kube-system
	482e9d4b35b4d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     2                   2ffa0b8fb3f12       kube-controller-manager-functional-707472    kube-system
	2afe1f7ddd930       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              2                   46b2d298a4a4d       kube-scheduler-functional-707472             kube-system
	b7b2197be4f88       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   53b723772f9d1       etcd-functional-707472                       kube-system
	6b9525d78f1df       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        1                   53b723772f9d1       etcd-functional-707472                       kube-system
	a16beecd703ca       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   4a19f023ef87c       kindnet-psh5n                                kube-system
	27d2f4ab1e829       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  1                   c8d01d90d8337       kube-proxy-kjbcx                             kube-system
	a8ab3b1e9af14       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         1                   9ea628cc29b23       storage-provisioner                          kube-system
	b60ea6a7e5f00       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   beadbf0100952       coredns-66bc5c9577-b9xl8                     kube-system
	19a1752a506d8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              1                   46b2d298a4a4d       kube-scheduler-functional-707472             kube-system
	fa38702321d1b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     1                   2ffa0b8fb3f12       kube-controller-manager-functional-707472    kube-system
	
	
	==> coredns [b60ea6a7e5f00e7a695a80ba874ea3904054813601adf3f6f3a7d4a4d415aa00] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33327 - 23581 "HINFO IN 772534175876923152.2712183673272208905. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025728133s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fadc9368bcca0191727c2a147b6ffc02a30afb5b81c2026c88a56d7afdcd907b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38480 - 41355 "HINFO IN 630013851635637046.2809538329490895623. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02593512s
	
	
	==> describe nodes <==
	Name:               functional-707472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-707472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=functional-707472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_22_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:22:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-707472
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:35:36 +0000   Sun, 26 Oct 2025 14:22:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:35:36 +0000   Sun, 26 Oct 2025 14:22:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:35:36 +0000   Sun, 26 Oct 2025 14:22:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:35:36 +0000   Sun, 26 Oct 2025 14:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-707472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fff2b4d3-74f0-44f9-ba6c-32b41d9e1ccb
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-4rn47                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-49q5b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-b9xl8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-707472                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-psh5n                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-707472              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-707472     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kjbcx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-707472              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-f9b46    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zbpt6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-707472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-707472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-707472 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-707472 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-707472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-707472 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-707472 event: Registered Node functional-707472 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-707472 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-707472 event: Registered Node functional-707472 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-707472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-707472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-707472 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-707472 event: Registered Node functional-707472 in Controller
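The percentages in the Allocated resources block above are each request sum divided by the node's allocatable capacity, truncated to a whole percent; for CPU that is 850m of 2000m. A quick check of the arithmetic:

package main

import "fmt"

func main() {
	// Per-pod CPU requests from the table above, in millicores:
	// coredns 100m, etcd 100m, kindnet 100m, kube-apiserver 250m,
	// kube-controller-manager 200m, kube-scheduler 100m.
	requests := 100 + 100 + 100 + 250 + 200 + 100
	allocatable := 2 * 1000 // 2 allocatable CPUs in millicores
	fmt.Printf("cpu %dm (%d%%)\n", requests, requests*100/allocatable) // cpu 850m (42%)
}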
	
	
	==> dmesg <==
	[Oct26 13:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct26 14:15] overlayfs: idmapped layers are currently not supported
	[  +0.080342] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct26 14:21] overlayfs: idmapped layers are currently not supported
	[Oct26 14:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6b9525d78f1df3a2fb33106fa197160ea1335a3770899ad2e746ecb150272e81] <==
	{"level":"warn","ts":"2025-10-26T14:23:47.257657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.270343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.329836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.370156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.372311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.388557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:23:47.450361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:24:12.190985Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T14:24:12.191041Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-707472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T14:24:12.191155Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:24:12.478095Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:24:12.478215Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:24:12.478277Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T14:24:12.478358Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:24:12.478379Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:24:12.478420Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:24:12.478428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:24:12.478406Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:24:12.478477Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:24:12.478492Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:24:12.478500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:24:12.482290Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T14:24:12.482383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:24:12.482464Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T14:24:12.482494Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-707472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b7b2197be4f8825509ad9309291bcff7ebf5e3c816ef39b177d55b3d83c95e84] <==
	{"level":"warn","ts":"2025-10-26T14:24:31.475074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.507748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.535292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.576856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.653462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.669350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.724784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.765688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.781619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.807804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.844251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.869565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.893248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.925459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.943001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:31.978874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.011622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.049211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.079246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.115889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.141004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:24:32.272990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42560","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:34:30.473247Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1139}
	{"level":"info","ts":"2025-10-26T14:34:30.497550Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1139,"took":"24.01398ms","hash":1080763091,"current-db-size-bytes":3362816,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-26T14:34:30.497605Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1080763091,"revision":1139,"compact-revision":-1}
	
	
	==> kernel <==
	 14:35:41 up  4:18,  0 user,  load average: 0.77, 0.57, 1.44
	Linux functional-707472 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a16beecd703ca66c7fbf358b134c813407136fa0af5a860aff7ffd9804351a02] <==
	I1026 14:23:44.022033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 14:23:44.028615       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 14:23:44.029077       1 main.go:148] setting mtu 1500 for CNI 
	I1026 14:23:44.030470       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 14:23:44.032786       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:23:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:23:44.257807       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:23:44.257900       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:23:44.257935       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:23:44.267915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 14:23:48.668886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:23:48.668931       1 metrics.go:72] Registering metrics
	I1026 14:23:48.669003       1 controller.go:711] "Syncing nftables rules"
	I1026 14:23:54.258093       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:54.258147       1 main.go:301] handling current node
	I1026 14:24:04.257581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:04.257639       1 main.go:301] handling current node
	
	
	==> kindnet [a2ed11c06a2d1fbe05249b6b9ff1a621129b3891219d150b1ee60663f165bb39] <==
	I1026 14:33:34.538450       1 main.go:301] handling current node
	I1026 14:33:44.538841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:44.538878       1 main.go:301] handling current node
	I1026 14:33:54.538856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:54.538889       1 main.go:301] handling current node
	I1026 14:34:04.539321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:04.539354       1 main.go:301] handling current node
	I1026 14:34:14.544884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:14.544926       1 main.go:301] handling current node
	I1026 14:34:24.538836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:24.538870       1 main.go:301] handling current node
	I1026 14:34:34.540781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:34.540894       1 main.go:301] handling current node
	I1026 14:34:44.540805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:44.540935       1 main.go:301] handling current node
	I1026 14:34:54.538942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:54.538975       1 main.go:301] handling current node
	I1026 14:35:04.538866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:35:04.541343       1 main.go:301] handling current node
	I1026 14:35:14.538894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:35:14.538947       1 main.go:301] handling current node
	I1026 14:35:24.538895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:35:24.538938       1 main.go:301] handling current node
	I1026 14:35:34.541404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:35:34.541434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e144436ce571c1476dff2c83c796157be52ffcde89b0c7faf6296d8564c9e69c] <==
	I1026 14:24:33.326737       1 aggregator.go:171] initial CRD sync complete...
	I1026 14:24:33.326753       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 14:24:33.326760       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 14:24:33.326765       1 cache.go:39] Caches are synced for autoregister controller
	E1026 14:24:33.327964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 14:24:33.366862       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 14:24:33.864678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 14:24:34.105288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 14:24:35.345222       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 14:24:35.505278       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 14:24:35.585061       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 14:24:35.592851       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 14:24:36.926790       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 14:24:37.029437       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 14:24:37.077344       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 14:24:51.519635       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.114.158"}
	I1026 14:25:00.646085       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.45.105"}
	I1026 14:25:04.409350       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.36.20"}
	E1026 14:25:31.279514       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53838: use of closed network connection
	E1026 14:25:38.500772       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53882: use of closed network connection
	I1026 14:25:38.848313       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.219.156"}
	I1026 14:34:33.267678       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 14:35:21.875881       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 14:35:22.211201       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.51.169"}
	I1026 14:35:22.231718       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.227.138"}
	
	
	==> kube-controller-manager [482e9d4b35b4d10a2ae15a80e725e7b8fd0be941f0c4ff291dc7183280c1a2f3] <==
	I1026 14:24:36.769935       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:24:36.769980       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 14:24:36.774995       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:24:36.775304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:24:36.775673       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:24:36.776778       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 14:24:36.779356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:24:36.782425       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 14:24:36.786570       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 14:24:36.793138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:24:36.793288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 14:24:36.793340       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:24:36.793409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:24:36.795673       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 14:24:36.836080       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:24:36.836188       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 14:24:36.836219       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1026 14:35:21.986433       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:21.998391       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.023159       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.034742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.039677       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.056453       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.056591       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:35:22.065029       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [fa38702321d1b62a981604948731c904d71e02c5a1c7f6f226f76a7291bba746] <==
	I1026 14:23:51.848584       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 14:23:51.848816       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:23:51.849067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:23:51.849165       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 14:23:51.854132       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 14:23:51.856055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:23:51.861607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:23:51.861687       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 14:23:51.861781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 14:23:51.864344       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 14:23:51.866146       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:23:51.869343       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 14:23:51.882556       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 14:23:51.887904       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:23:51.889103       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 14:23:51.889187       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 14:23:51.889277       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-707472"
	I1026 14:23:51.889353       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 14:23:51.893577       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 14:23:51.897632       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 14:23:51.897894       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 14:23:51.899134       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:23:51.899838       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 14:23:51.906124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:23:51.911945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [27d2f4ab1e8299770f03ab16ab49d3e72cb42d25d6f28fbc9d8c2148e832b824] <==
	I1026 14:23:47.049477       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:23:47.709733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:23:48.628886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:23:48.628924       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:23:48.628987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:23:48.915991       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:23:48.916051       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:23:48.968926       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:23:48.969254       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:23:48.969269       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:23:48.973708       1 config.go:200] "Starting service config controller"
	I1026 14:23:48.973730       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:23:48.982685       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:23:48.982767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:23:48.982807       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:23:48.982834       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:23:48.983548       1 config.go:309] "Starting node config controller"
	I1026 14:23:48.983609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:23:48.983639       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:23:49.076105       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:23:49.085515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:23:49.085537       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5d95d226424b6a95b791f3cc8299f555d48c7e6e58c979686e67b6e2ed98700] <==
	I1026 14:24:34.279264       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:24:34.375163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:24:34.477412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:24:34.477524       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:24:34.477669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:24:34.496966       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:24:34.497082       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:24:34.501075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:24:34.501438       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:24:34.501679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:24:34.503391       1 config.go:200] "Starting service config controller"
	I1026 14:24:34.503448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:24:34.503497       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:24:34.503523       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:24:34.503558       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:24:34.503585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:24:34.504617       1 config.go:309] "Starting node config controller"
	I1026 14:24:34.504673       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:24:34.504950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:24:34.604503       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:24:34.604541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:24:34.604581       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19a1752a506d88561cdae6f5650a7735aee76b9c80cb77c00b71f15fcb4b082f] <==
	I1026 14:23:47.066125       1 serving.go:386] Generated self-signed cert in-memory
	I1026 14:23:48.746056       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:23:48.746084       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:23:48.762056       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:23:48.762345       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 14:23:48.762409       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 14:23:48.762466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:23:48.773832       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:23:48.780562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:23:48.779868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 14:23:48.781071       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 14:23:48.862582       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 14:23:48.880883       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:23:48.881176       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 14:24:12.190279       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 14:24:12.190314       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 14:24:12.190335       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 14:24:12.190371       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 14:24:12.190391       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:24:12.190409       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1026 14:24:12.190753       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 14:24:12.190800       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2afe1f7ddd93065c2322f71b31ea9f61804b34685a3223bdee1cbee3dc14b979] <==
	I1026 14:24:31.355869       1 serving.go:386] Generated self-signed cert in-memory
	W1026 14:24:33.241243       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 14:24:33.241344       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 14:24:33.241378       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 14:24:33.241421       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 14:24:33.283443       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:24:33.283474       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:24:33.285759       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:24:33.285856       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:24:33.286461       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:24:33.286577       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:24:33.386325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:35:03 functional-707472 kubelet[3813]: E1026 14:35:03.830763    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-49q5b" podUID="067f19bc-67f4-4787-9531-7dc6388a40d2"
	Oct 26 14:35:10 functional-707472 kubelet[3813]: I1026 14:35:10.751332    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswt6\" (UniqueName: \"kubernetes.io/projected/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-kube-api-access-fswt6\") pod \"busybox-mount\" (UID: \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\") " pod="default/busybox-mount"
	Oct 26 14:35:10 functional-707472 kubelet[3813]: I1026 14:35:10.751400    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-test-volume\") pod \"busybox-mount\" (UID: \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\") " pod="default/busybox-mount"
	Oct 26 14:35:10 functional-707472 kubelet[3813]: E1026 14:35:10.830694    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4rn47" podUID="cd0ca245-bd23-4a52-80f9-72d0febda5c1"
	Oct 26 14:35:10 functional-707472 kubelet[3813]: W1026 14:35:10.933196    3813 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/crio-8ef7dbbdff41c3f7d06666c2ccbf1d6aad9d1e22fe64f78dd8a461ce92315eee WatchSource:0}: Error finding container 8ef7dbbdff41c3f7d06666c2ccbf1d6aad9d1e22fe64f78dd8a461ce92315eee: Status 404 returned error can't find the container with id 8ef7dbbdff41c3f7d06666c2ccbf1d6aad9d1e22fe64f78dd8a461ce92315eee
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.878241    3813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-test-volume\") pod \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\" (UID: \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\") "
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.878311    3813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswt6\" (UniqueName: \"kubernetes.io/projected/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-kube-api-access-fswt6\") pod \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\" (UID: \"4c0d9bb5-e998-4cda-a24e-5424e558dbc6\") "
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.878834    3813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-test-volume" (OuterVolumeSpecName: "test-volume") pod "4c0d9bb5-e998-4cda-a24e-5424e558dbc6" (UID: "4c0d9bb5-e998-4cda-a24e-5424e558dbc6"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.880636    3813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-kube-api-access-fswt6" (OuterVolumeSpecName: "kube-api-access-fswt6") pod "4c0d9bb5-e998-4cda-a24e-5424e558dbc6" (UID: "4c0d9bb5-e998-4cda-a24e-5424e558dbc6"). InnerVolumeSpecName "kube-api-access-fswt6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.979001    3813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fswt6\" (UniqueName: \"kubernetes.io/projected/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-kube-api-access-fswt6\") on node \"functional-707472\" DevicePath \"\""
	Oct 26 14:35:14 functional-707472 kubelet[3813]: I1026 14:35:14.979042    3813 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4c0d9bb5-e998-4cda-a24e-5424e558dbc6-test-volume\") on node \"functional-707472\" DevicePath \"\""
	Oct 26 14:35:15 functional-707472 kubelet[3813]: I1026 14:35:15.706047    3813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef7dbbdff41c3f7d06666c2ccbf1d6aad9d1e22fe64f78dd8a461ce92315eee"
	Oct 26 14:35:15 functional-707472 kubelet[3813]: E1026 14:35:15.830842    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-49q5b" podUID="067f19bc-67f4-4787-9531-7dc6388a40d2"
	Oct 26 14:35:22 functional-707472 kubelet[3813]: I1026 14:35:22.147665    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e96a9f97-0179-4eac-bce3-c5e95535242b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zbpt6\" (UID: \"e96a9f97-0179-4eac-bce3-c5e95535242b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6"
	Oct 26 14:35:22 functional-707472 kubelet[3813]: I1026 14:35:22.148164    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzct5\" (UniqueName: \"kubernetes.io/projected/e96a9f97-0179-4eac-bce3-c5e95535242b-kube-api-access-gzct5\") pod \"kubernetes-dashboard-855c9754f9-zbpt6\" (UID: \"e96a9f97-0179-4eac-bce3-c5e95535242b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6"
	Oct 26 14:35:22 functional-707472 kubelet[3813]: I1026 14:35:22.249258    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5535d5da-4ebf-40b5-9d88-4b696f34ecef-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-f9b46\" (UID: \"5535d5da-4ebf-40b5-9d88-4b696f34ecef\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-f9b46"
	Oct 26 14:35:22 functional-707472 kubelet[3813]: I1026 14:35:22.249391    3813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqgf\" (UniqueName: \"kubernetes.io/projected/5535d5da-4ebf-40b5-9d88-4b696f34ecef-kube-api-access-cwqgf\") pod \"dashboard-metrics-scraper-77bf4d6c4c-f9b46\" (UID: \"5535d5da-4ebf-40b5-9d88-4b696f34ecef\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-f9b46"
	Oct 26 14:35:22 functional-707472 kubelet[3813]: W1026 14:35:22.434391    3813 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/281808eb03cdd750588d440bfec39eed68e4668427084b2b8daa769eb415806d/crio-63d2d2ad9ddae83b64ad1e9bbc7820ac0d1fc95e3e18ac44f86b22d8706d1e03 WatchSource:0}: Error finding container 63d2d2ad9ddae83b64ad1e9bbc7820ac0d1fc95e3e18ac44f86b22d8706d1e03: Status 404 returned error can't find the container with id 63d2d2ad9ddae83b64ad1e9bbc7820ac0d1fc95e3e18ac44f86b22d8706d1e03
	Oct 26 14:35:25 functional-707472 kubelet[3813]: E1026 14:35:25.831265    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4rn47" podUID="cd0ca245-bd23-4a52-80f9-72d0febda5c1"
	Oct 26 14:35:28 functional-707472 kubelet[3813]: I1026 14:35:28.780015    3813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zbpt6" podStartSLOduration=2.096324124 podStartE2EDuration="6.779995374s" podCreationTimestamp="2025-10-26 14:35:22 +0000 UTC" firstStartedPulling="2025-10-26 14:35:22.437224325 +0000 UTC m=+653.803165260" lastFinishedPulling="2025-10-26 14:35:27.120895575 +0000 UTC m=+658.486836510" observedRunningTime="2025-10-26 14:35:27.781858485 +0000 UTC m=+659.147799428" watchObservedRunningTime="2025-10-26 14:35:28.779995374 +0000 UTC m=+660.145936309"
	Oct 26 14:35:30 functional-707472 kubelet[3813]: E1026 14:35:30.831023    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-49q5b" podUID="067f19bc-67f4-4787-9531-7dc6388a40d2"
	Oct 26 14:35:36 functional-707472 kubelet[3813]: E1026 14:35:36.832481    3813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:35:36 functional-707472 kubelet[3813]: E1026 14:35:36.832522    3813 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:35:36 functional-707472 kubelet[3813]: E1026 14:35:36.832599    3813 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-4rn47_default(cd0ca245-bd23-4a52-80f9-72d0febda5c1): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:35:36 functional-707472 kubelet[3813]: E1026 14:35:36.832631    3813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-4rn47" podUID="cd0ca245-bd23-4a52-80f9-72d0febda5c1"
	
	
	==> kubernetes-dashboard [426ceeb1bb77a47ec799290a9b74201b5a740da3a55bf02efc4ecb4f664e158e] <==
	2025/10/26 14:35:27 Using namespace: kubernetes-dashboard
	2025/10/26 14:35:27 Using in-cluster config to connect to apiserver
	2025/10/26 14:35:27 Using secret token for csrf signing
	2025/10/26 14:35:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 14:35:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 14:35:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 14:35:27 Generating JWE encryption key
	2025/10/26 14:35:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 14:35:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 14:35:27 Initializing JWE encryption key from synchronized object
	2025/10/26 14:35:27 Creating in-cluster Sidecar client
	2025/10/26 14:35:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 14:35:27 Serving insecurely on HTTP port: 9090
	2025/10/26 14:35:27 Starting overwatch
	
	
	==> storage-provisioner [890988ab0c43332f4b04ee5df8cc091be6d57db1b749d4fceadf4d82d15e2e81] <==
	W1026 14:35:16.501234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:18.505073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:18.510450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:20.513905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:20.518787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:22.522901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:22.527773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:24.531177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:24.539224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:26.542661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:26.548422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:28.552442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:28.559735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:30.562402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:30.567359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:32.570805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:32.577150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:34.580409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:34.587876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:36.591143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:36.595564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:38.599186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:38.603797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:40.607893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:35:40.614082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a8ab3b1e9af14f7d0ed8053dc3a55e6d920f5736331e347f589dc54f67af27ca] <==
	I1026 14:23:44.449902       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 14:23:48.642883       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 14:23:48.642951       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 14:23:48.702800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:23:52.187644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:23:56.448290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:00.071022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:03.124952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:06.147485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:06.152539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 14:24:06.152716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 14:24:06.152876       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-707472_8bd16036-0fc2-4fa0-bddc-d3e464b27b23!
	I1026 14:24:06.153172       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3c01881-67d0-409f-98f0-1fe4961f0fe0", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-707472_8bd16036-0fc2-4fa0-bddc-d3e464b27b23 became leader
	W1026 14:24:06.161413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:06.164527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 14:24:06.253171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-707472_8bd16036-0fc2-4fa0-bddc-d3e464b27b23!
	W1026 14:24:08.168187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:08.177119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:10.180871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:10.187686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-707472 -n functional-707472
helpers_test.go:269: (dbg) Run:  kubectl --context functional-707472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-4rn47 hello-node-connect-7d85dfc575-49q5b
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-707472 describe pod busybox-mount hello-node-75c85bcc94-4rn47 hello-node-connect-7d85dfc575-49q5b
helpers_test.go:290: (dbg) kubectl --context functional-707472 describe pod busybox-mount hello-node-75c85bcc94-4rn47 hello-node-connect-7d85dfc575-49q5b:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-707472/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:35:10 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d27028f311430e8997443ba27f7ba63a27c2ca8bab6c4afbfe47adff562befca
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 14:35:13 +0000
	      Finished:     Sun, 26 Oct 2025 14:35:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fswt6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fswt6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  31s   default-scheduler  Successfully assigned default/busybox-mount to functional-707472
	  Normal  Pulling    32s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     29s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.126s (2.126s including waiting). Image size: 3774172 bytes.
	  Normal  Created    29s   kubelet            Created container: mount-munger
	  Normal  Started    29s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-4rn47
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-707472/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:25:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4qhl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b4qhl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4rn47 to functional-707472
	  Normal   Pulling    7m51s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m51s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m51s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     32s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-49q5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-707472/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:25:38 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgg76 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bgg76:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-49q5b to functional-707472
	  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.56s)
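
Every ErrImagePull in the logs above carries the same root cause: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That message comes from the short-name resolution CRI-O inherits from containers/image: with short-name-mode set to "enforcing" and more than one unqualified-search registry configured, a non-interactive pull of an unqualified name such as "kicbase/echo-server" is refused rather than guessed. A minimal way to confirm the node's settings, sketched under the assumption that the standard containers config paths are in use:

	# Sketch: inspect short-name settings inside the minikube node (standard paths assumed).
	out/minikube-linux-arm64 -p functional-707472 ssh -- \
	  grep -Ern 'short-name-mode|unqualified-search-registries' \
	  /etc/containers/registries.conf /etc/containers/registries.conf.d/

The workload-side remedy is a fully-qualified image reference; for example (a hypothetical fix, not part of this run, and the 1.0 tag is an assumption):

	kubectl --context functional-707472 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:1.0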

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image load --daemon kicbase/echo-server:functional-707472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-707472" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)
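
For context: "image load --daemon" exports the named image from the host Docker daemon and imports it into the cluster's container runtime (CRI-O in this job), and the "image ls" at functional_test.go:466 is the check that decides pass or fail. A quick manual way to see what actually landed in the runtime, assuming the profile is still running:

	# Sketch: list images as minikube sees them, then as CRI-O itself reports them.
	out/minikube-linux-arm64 -p functional-707472 image ls
	out/minikube-linux-arm64 -p functional-707472 ssh -- sudo crictl images | grep echo-server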

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image load --daemon kicbase/echo-server:functional-707472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-707472" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-707472
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image load --daemon kicbase/echo-server:functional-707472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-707472" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (601.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-707472 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-707472 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4rn47" [cd0ca245-bd23-4a52-80f9-72d0febda5c1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-707472 -n functional-707472
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-26 14:35:01.023803872 +0000 UTC m=+1232.270528591
functional_test.go:1460: (dbg) Run:  kubectl --context functional-707472 describe po hello-node-75c85bcc94-4rn47 -n default
functional_test.go:1460: (dbg) kubectl --context functional-707472 describe po hello-node-75c85bcc94-4rn47 -n default:
Name:             hello-node-75c85bcc94-4rn47
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-707472/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:25:00 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4qhl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-b4qhl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-4rn47 to functional-707472
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-707472 logs hello-node-75c85bcc94-4rn47 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-707472 logs hello-node-75c85bcc94-4rn47 -n default: exit status 1 (113.114686ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-4rn47" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-707472 logs hello-node-75c85bcc94-4rn47 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.13s)
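
Editor's note: the root cause for this group is in the events above: CRI-O on this image enforces short-name resolution, so the unqualified reference "kicbase/echo-server" is rejected as ambiguous and the pod never leaves ImagePullBackOff. Two possible fixes, as a sketch only (it assumes the deployment/container names from the describe output and that the image is hosted on Docker Hub):

	# Fully qualify the image so no short-name resolution happens
	kubectl --context functional-707472 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest

	# Or map the short name explicitly inside the node (containers-registries.conf.d aliases)
	printf '[aliases]\n"kicbase/echo-server" = "docker.io/kicbase/echo-server"\n' \
	  | minikube -p functional-707472 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf

The ServiceCmd failures later in this report are secondary to this one.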

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image save kicbase/echo-server:functional-707472 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
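
Editor's note: `image save` exited 0 but left no file behind, and ImageLoadFromFile below fails as a direct cascade when it stats the same missing tarball. When reproducing manually it is worth asserting the artifact before reusing it; a sketch assuming the same profile and tag:

	out/minikube-linux-arm64 -p functional-707472 image save \
	  kicbase/echo-server:functional-707472 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar && tar tf /tmp/echo-server-save.tar | head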

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1026 14:25:02.410073  739040 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:25:02.410302  739040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:25:02.410314  739040 out.go:374] Setting ErrFile to fd 2...
	I1026 14:25:02.410319  739040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:25:02.410594  739040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:25:02.411275  739040 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:25:02.411399  739040 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:25:02.411864  739040 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
	I1026 14:25:02.430973  739040 ssh_runner.go:195] Run: systemctl --version
	I1026 14:25:02.431043  739040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
	I1026 14:25:02.449379  739040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
	I1026 14:25:02.555522  739040 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1026 14:25:02.555589  739040 cache_images.go:254] Failed to load cached images for "functional-707472": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1026 14:25:02.555613  739040 cache_images.go:266] failed pushing to: functional-707472

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
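
Editor's note: as flagged above, this is the downstream half of the ImageSaveToFile failure: the stat error in the stderr trace points at the tarball the earlier save never produced. A guard sketch for manual runs:

	# Only attempt the load when the save step actually produced a tarball
	tar=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	[ -s "$tar" ] && out/minikube-linux-arm64 -p functional-707472 image load "$tar" --alsologtostderr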

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-707472
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image save --daemon kicbase/echo-server:functional-707472 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-707472
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-707472: exit status 1 (21.316523ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-707472

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-707472

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
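
Editor's note: `image save --daemon` is expected to make the image visible to the host Docker daemon under the localhost/ prefix, which is exactly what the failing `docker image inspect` probes for. A quick verification sketch, assuming the same tag:

	out/minikube-linux-arm64 -p functional-707472 image save --daemon \
	  kicbase/echo-server:functional-707472 --alsologtostderr
	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server || echo 'not in daemon'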

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 service --namespace=default --https --url hello-node: exit status 115 (405.401748ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31580
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-707472 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
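
Editor's note: this failure, and the Format and URL subtests that follow, are secondary to DeployApp: the hello-node NodePort service exists (a URL is printed on stdout) but has no ready endpoints because its only pod is stuck in ImagePullBackOff. A sketch for confirming that diagnosis:

	kubectl --context functional-707472 get endpoints hello-node
	kubectl --context functional-707472 get pods -l app=hello-node -o wide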

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 service hello-node --url --format={{.IP}}: exit status 115 (412.012396ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-707472 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 service hello-node --url: exit status 115 (397.695065ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31580
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-707472 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31580
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestJSONOutput/pause/Command (2.43s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-372963 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-372963 --output=json --user=testUser: exit status 80 (2.43162396s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7bbc1a98-dd9c-462b-9856-e72b62497de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-372963 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"56493f7f-6a8d-462c-9c56-86a711dc64d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T14:48:32Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2c535264-102a-4292-8ca4-ebbecf99e04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-372963 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.43s)
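
Editor's note: GUEST_PAUSE here, GUEST_UNPAUSE in the next test, and TestPause/serial/Pause later in this report all fail the same way: minikube enumerates containers with `sudo runc list -f json`, and /run/runc does not exist inside the node, which suggests the runtime keeps its state under a different root on this kicbase image. A diagnostic sketch, assuming the profile is still running (the crio config field names are an assumption):

	minikube -p json-output-372963 ssh -- \
	  'ls -d /run/runc /run/crun /run/containers 2>/dev/null; sudo crio config | grep -E "runtime_(path|root)"'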

                                                
                                    
TestJSONOutput/unpause/Command (2.06s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-372963 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-372963 --output=json --user=testUser: exit status 80 (2.059037519s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2f27e868-7842-49bf-83c9-83ab3035365a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-372963 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a33615e5-d0fd-42cb-9371-8f715a17a784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T14:48:34Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"5362a35c-f9b5-477b-9179-1cc43fb297f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-372963 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.06s)

                                                
                                    
TestPause/serial/Pause (8.42s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-013921 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-013921 --alsologtostderr -v=5: exit status 80 (2.056216487s)

                                                
                                                
-- stdout --
	* Pausing node pause-013921 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:11:02.725754  878940 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:11:02.726876  878940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:02.726919  878940 out.go:374] Setting ErrFile to fd 2...
	I1026 15:11:02.726938  878940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:02.727246  878940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:11:02.727576  878940 out.go:368] Setting JSON to false
	I1026 15:11:02.727631  878940 mustload.go:65] Loading cluster: pause-013921
	I1026 15:11:02.728843  878940 config.go:182] Loaded profile config "pause-013921": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:11:02.729402  878940 cli_runner.go:164] Run: docker container inspect pause-013921 --format={{.State.Status}}
	I1026 15:11:02.748283  878940 host.go:66] Checking if "pause-013921" exists ...
	I1026 15:11:02.748640  878940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:11:02.842060  878940 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-26 15:11:02.831691976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:11:02.842722  878940 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-013921 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:11:02.847898  878940 out.go:179] * Pausing node pause-013921 ... 
	I1026 15:11:02.852061  878940 host.go:66] Checking if "pause-013921" exists ...
	I1026 15:11:02.852439  878940 ssh_runner.go:195] Run: systemctl --version
	I1026 15:11:02.852497  878940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-013921
	I1026 15:11:02.895749  878940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/pause-013921/id_rsa Username:docker}
	I1026 15:11:02.999777  878940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:03.015716  878940 pause.go:52] kubelet running: true
	I1026 15:11:03.015788  878940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:11:03.262363  878940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:11:03.262452  878940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:11:03.363539  878940 cri.go:89] found id: "abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a"
	I1026 15:11:03.363565  878940 cri.go:89] found id: "5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb"
	I1026 15:11:03.363571  878940 cri.go:89] found id: "398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b"
	I1026 15:11:03.363575  878940 cri.go:89] found id: "914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4"
	I1026 15:11:03.363578  878940 cri.go:89] found id: "e13e4a5e5087386a946a8787e9c00aa3b98d692a57e746f4c139d1959a1fd662"
	I1026 15:11:03.363581  878940 cri.go:89] found id: "35849e728c577734171fde5d429ba0916a28decf50dbf44d86f32de2593e312a"
	I1026 15:11:03.363584  878940 cri.go:89] found id: "eb9ec360c79c17be93409d1ec23c1e93ef3d42937f60d3e49f44310a90b0756a"
	I1026 15:11:03.363587  878940 cri.go:89] found id: "37abefbe208d829b48a5663f6cc3a302e6cfa8ca844a962be1b40bc04483726e"
	I1026 15:11:03.363591  878940 cri.go:89] found id: "0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189"
	I1026 15:11:03.363597  878940 cri.go:89] found id: "704aaf411be7483ceb755139de98cf5d875037d632ffe53352eef60c60e3b1f6"
	I1026 15:11:03.363600  878940 cri.go:89] found id: "670a5333bcde4a85128035b6a2625af2679daaf9a076613625fe8fb0dafef960"
	I1026 15:11:03.363603  878940 cri.go:89] found id: "7d264f78e16f6ebb0288181f1d512d12ee03dbff850c905df2ffb045c57b4da6"
	I1026 15:11:03.363606  878940 cri.go:89] found id: "e6a305cf0786e2aad36171947c363b09df82a1e38658dcc14e6d059f4e70bde9"
	I1026 15:11:03.363609  878940 cri.go:89] found id: "6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765"
	I1026 15:11:03.363612  878940 cri.go:89] found id: ""
	I1026 15:11:03.363661  878940 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:11:03.378348  878940 retry.go:31] will retry after 247.134135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:03Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:11:03.625741  878940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:03.641922  878940 pause.go:52] kubelet running: false
	I1026 15:11:03.641985  878940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:11:03.858684  878940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:11:03.858764  878940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:11:03.963208  878940 cri.go:89] found id: "abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a"
	I1026 15:11:03.963230  878940 cri.go:89] found id: "5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb"
	I1026 15:11:03.963235  878940 cri.go:89] found id: "398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b"
	I1026 15:11:03.963239  878940 cri.go:89] found id: "914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4"
	I1026 15:11:03.963243  878940 cri.go:89] found id: "e13e4a5e5087386a946a8787e9c00aa3b98d692a57e746f4c139d1959a1fd662"
	I1026 15:11:03.963247  878940 cri.go:89] found id: "35849e728c577734171fde5d429ba0916a28decf50dbf44d86f32de2593e312a"
	I1026 15:11:03.963250  878940 cri.go:89] found id: "eb9ec360c79c17be93409d1ec23c1e93ef3d42937f60d3e49f44310a90b0756a"
	I1026 15:11:03.963253  878940 cri.go:89] found id: "37abefbe208d829b48a5663f6cc3a302e6cfa8ca844a962be1b40bc04483726e"
	I1026 15:11:03.963256  878940 cri.go:89] found id: "0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189"
	I1026 15:11:03.963262  878940 cri.go:89] found id: "704aaf411be7483ceb755139de98cf5d875037d632ffe53352eef60c60e3b1f6"
	I1026 15:11:03.963270  878940 cri.go:89] found id: "670a5333bcde4a85128035b6a2625af2679daaf9a076613625fe8fb0dafef960"
	I1026 15:11:03.963273  878940 cri.go:89] found id: "7d264f78e16f6ebb0288181f1d512d12ee03dbff850c905df2ffb045c57b4da6"
	I1026 15:11:03.963277  878940 cri.go:89] found id: "e6a305cf0786e2aad36171947c363b09df82a1e38658dcc14e6d059f4e70bde9"
	I1026 15:11:03.963280  878940 cri.go:89] found id: "6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765"
	I1026 15:11:03.963283  878940 cri.go:89] found id: ""
	I1026 15:11:03.963341  878940 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:11:03.979335  878940 retry.go:31] will retry after 319.102978ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:03Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:11:04.298826  878940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:04.313591  878940 pause.go:52] kubelet running: false
	I1026 15:11:04.313701  878940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:11:04.496322  878940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:11:04.496477  878940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:11:04.590309  878940 cri.go:89] found id: "abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a"
	I1026 15:11:04.590386  878940 cri.go:89] found id: "5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb"
	I1026 15:11:04.590403  878940 cri.go:89] found id: "398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b"
	I1026 15:11:04.590420  878940 cri.go:89] found id: "914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4"
	I1026 15:11:04.590452  878940 cri.go:89] found id: "e13e4a5e5087386a946a8787e9c00aa3b98d692a57e746f4c139d1959a1fd662"
	I1026 15:11:04.590473  878940 cri.go:89] found id: "35849e728c577734171fde5d429ba0916a28decf50dbf44d86f32de2593e312a"
	I1026 15:11:04.590490  878940 cri.go:89] found id: "eb9ec360c79c17be93409d1ec23c1e93ef3d42937f60d3e49f44310a90b0756a"
	I1026 15:11:04.590507  878940 cri.go:89] found id: "37abefbe208d829b48a5663f6cc3a302e6cfa8ca844a962be1b40bc04483726e"
	I1026 15:11:04.590524  878940 cri.go:89] found id: "0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189"
	I1026 15:11:04.590553  878940 cri.go:89] found id: "704aaf411be7483ceb755139de98cf5d875037d632ffe53352eef60c60e3b1f6"
	I1026 15:11:04.590574  878940 cri.go:89] found id: "670a5333bcde4a85128035b6a2625af2679daaf9a076613625fe8fb0dafef960"
	I1026 15:11:04.590592  878940 cri.go:89] found id: "7d264f78e16f6ebb0288181f1d512d12ee03dbff850c905df2ffb045c57b4da6"
	I1026 15:11:04.590608  878940 cri.go:89] found id: "e6a305cf0786e2aad36171947c363b09df82a1e38658dcc14e6d059f4e70bde9"
	I1026 15:11:04.590629  878940 cri.go:89] found id: "6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765"
	I1026 15:11:04.590654  878940 cri.go:89] found id: ""
	I1026 15:11:04.590720  878940 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:11:04.636150  878940 out.go:203] 
	W1026 15:11:04.649663  878940 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:11:04.649692  878940 out.go:285] * 
	* 
	W1026 15:11:04.694815  878940 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:11:04.699900  878940 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-013921 --alsologtostderr -v=5" : exit status 80
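
Editor's note: the trace above shows the pause sequence itself: disable the kubelet, list the CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces, then enumerate running ones via `sudo runc list -f json`; each retry dies on the missing /run/runc state directory even though crictl, two steps earlier in the same trace, sees all fourteen containers. The crictl listing is the part that still works; the exact filter from the log, runnable by hand:

	minikube -p pause-013921 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
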
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-013921
helpers_test.go:243: (dbg) docker inspect pause-013921:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a",
	        "Created": "2025-10-26T15:09:12.659424926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 870090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:09:12.728946094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/hosts",
	        "LogPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a-json.log",
	        "Name": "/pause-013921",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-013921:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-013921",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a",
	                "LowerDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-013921",
	                "Source": "/var/lib/docker/volumes/pause-013921/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-013921",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-013921",
	                "name.minikube.sigs.k8s.io": "pause-013921",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89d54c1044fb696cdb9d8c710562eeb3929ad6d4088547cf64bed6f04dae8230",
	            "SandboxKey": "/var/run/docker/netns/89d54c1044fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-013921": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:38:99:d1:28:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac56ab308ea765edb4363917bce02a9ff3d550774ece55d26f6010b394f891fb",
	                    "EndpointID": "a423f00ed9f58dec3fe84a0b88e0bbf07194c005056eb78432395fc45c637be7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-013921",
	                        "9c1a31e1575f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-013921 -n pause-013921
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-013921 -n pause-013921: exit status 2 (358.061491ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-013921 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-013921 logs -n 25: (2.272400465s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat kubelet --no-pager                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status docker --all --full --no-pager                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat docker --no-pager                                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/docker/daemon.json                                                          │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo docker system info                                                                   │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cri-dockerd --version                                                                │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat containerd --no-pager                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/containerd/config.toml                                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo containerd config dump                                                               │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status crio --all --full --no-pager                                        │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                        │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                          │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                     │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:11:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:11:00.637188  878634 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:11:00.637394  878634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:00.637416  878634 out.go:374] Setting ErrFile to fd 2...
	I1026 15:11:00.637437  878634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:00.637767  878634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:11:00.638214  878634 out.go:368] Setting JSON to false
	I1026 15:11:00.639282  878634 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17613,"bootTime":1761473848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:11:00.639386  878634 start.go:141] virtualization:  
	I1026 15:11:00.642811  878634 out.go:179] * [force-systemd-env-969063] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:11:00.646608  878634 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:11:00.646673  878634 notify.go:220] Checking for updates...
	I1026 15:11:00.649632  878634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:11:00.652570  878634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:11:00.655524  878634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:11:00.658498  878634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:11:00.661477  878634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1026 15:11:00.665056  878634 config.go:182] Loaded profile config "pause-013921": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:11:00.665176  878634 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:11:00.690335  878634 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:11:00.690461  878634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:11:00.755487  878634 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:11:00.74554968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:11:00.755607  878634 docker.go:318] overlay module found
	I1026 15:11:00.758865  878634 out.go:179] * Using the docker driver based on user configuration
	I1026 15:11:00.761739  878634 start.go:305] selected driver: docker
	I1026 15:11:00.761754  878634 start.go:925] validating driver "docker" against <nil>
	I1026 15:11:00.761768  878634 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:11:00.762527  878634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:11:00.819931  878634 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:11:00.810324028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:11:00.820099  878634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:11:00.820336  878634 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 15:11:00.823390  878634 out.go:179] * Using Docker driver with root privileges
	I1026 15:11:00.826247  878634 cni.go:84] Creating CNI manager for ""
	I1026 15:11:00.826319  878634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:11:00.826332  878634 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:11:00.826423  878634 start.go:349] cluster config:
	{Name:force-systemd-env-969063 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-969063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:11:00.829616  878634 out.go:179] * Starting "force-systemd-env-969063" primary control-plane node in "force-systemd-env-969063" cluster
	I1026 15:11:00.832462  878634 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:11:00.835389  878634 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:11:00.838222  878634 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:11:00.838285  878634 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:11:00.838298  878634 cache.go:58] Caching tarball of preloaded images
	I1026 15:11:00.838312  878634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:11:00.838381  878634 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:11:00.838391  878634 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:11:00.838500  878634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/force-systemd-env-969063/config.json ...
	I1026 15:11:00.838523  878634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/force-systemd-env-969063/config.json: {Name:mk0645029bc7abde238533bd999197e2691e01a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:00.858023  878634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:11:00.858050  878634 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:11:00.858064  878634 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:11:00.858096  878634 start.go:360] acquireMachinesLock for force-systemd-env-969063: {Name:mk0a8b274aacb71c750bf4fb27f4bdfea5670c13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:11:00.858202  878634 start.go:364] duration metric: took 89.568µs to acquireMachinesLock for "force-systemd-env-969063"
	I1026 15:11:00.858233  878634 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-969063 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-969063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:11:00.858299  878634 start.go:125] createHost starting for "" (driver="docker")
	W1026 15:10:58.864675  876003 pod_ready.go:104] pod "kube-apiserver-pause-013921" is not "Ready", error: <nil>
	I1026 15:11:00.367627  876003 pod_ready.go:94] pod "kube-apiserver-pause-013921" is "Ready"
	I1026 15:11:00.367657  876003 pod_ready.go:86] duration metric: took 3.509126086s for pod "kube-apiserver-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:00.371301  876003 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.378485  876003 pod_ready.go:94] pod "kube-controller-manager-pause-013921" is "Ready"
	I1026 15:11:02.378569  876003 pod_ready.go:86] duration metric: took 2.007241545s for pod "kube-controller-manager-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.381658  876003 pod_ready.go:83] waiting for pod "kube-proxy-wgqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.387274  876003 pod_ready.go:94] pod "kube-proxy-wgqtw" is "Ready"
	I1026 15:11:02.387297  876003 pod_ready.go:86] duration metric: took 5.615071ms for pod "kube-proxy-wgqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.390137  876003 pod_ready.go:83] waiting for pod "kube-scheduler-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.542931  876003 pod_ready.go:94] pod "kube-scheduler-pause-013921" is "Ready"
	I1026 15:11:02.542961  876003 pod_ready.go:86] duration metric: took 152.75055ms for pod "kube-scheduler-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.542976  876003 pod_ready.go:40] duration metric: took 9.266575087s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:02.604030  876003 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:11:02.607596  876003 out.go:179] * Done! kubectl is now configured to use "pause-013921" cluster and "default" namespace by default
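
Editor's note on the pod_ready.go lines above: minikube is polling each kube-system pod until its Ready condition turns True, recording a per-pod duration metric. The following is a minimal client-go sketch of that kind of poll, not minikube's actual implementation; the kubeconfig path, interval, and timeout are assumptions, and the pod name is simply copied from the log.

    // Illustrative sketch only: wait for one pod's Ready condition.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; minikube resolves its own path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	start := time.Now()
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-pause-013921", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling; the apiserver may still be restarting
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Printf("ready=%v after %s\n", err == nil, time.Since(start)) // cf. "duration metric: took ..."
    }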
	
	
	==> CRI-O <==
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.275673634Z" level=info msg="Created container abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a: kube-system/kube-apiserver-pause-013921/kube-apiserver" id=a84bf22b-d3fb-4784-8bf8-c658b7cc7187 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.276679545Z" level=info msg="Starting container: 5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb" id=01e4f056-a069-47f1-afe8-4692cfe93e6a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.278703043Z" level=info msg="Created container 914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4: kube-system/kindnet-kp4sz/kindnet-cni" id=8dd7f813-a366-42f7-b808-08c11e3ecbbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.282837211Z" level=info msg="Starting container: 398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b" id=fcb1ed8d-3fb7-4dec-af0b-0d15b174e985 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.284864894Z" level=info msg="Starting container: abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a" id=3dac5819-f992-42bd-84b5-9f664b6d3bbd name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.287174664Z" level=info msg="Started container" PID=2320 containerID=398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b description=kube-system/etcd-pause-013921/etcd id=fcb1ed8d-3fb7-4dec-af0b-0d15b174e985 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edf8ce0e1d2b4b0b7280cc980e4fe7d2a5dba0ad72ab00142970446e399f98cc
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.288587726Z" level=info msg="Started container" PID=2316 containerID=5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb description=kube-system/kube-scheduler-pause-013921/kube-scheduler id=01e4f056-a069-47f1-afe8-4692cfe93e6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2990e8dd0973ccd6ee48a1287e63d86c244251c0fcdad5925c3f3eb556eb6e99
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.303563502Z" level=info msg="Started container" PID=2323 containerID=abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a description=kube-system/kube-apiserver-pause-013921/kube-apiserver id=3dac5819-f992-42bd-84b5-9f664b6d3bbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5895f9361120e9b34523d661eed9e740b377a50880b7198caf0d9bb8d0c1e79
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.306028219Z" level=info msg="Starting container: 914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4" id=1d6e3676-b4aa-48c5-8627-ceabdc7eebb7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.313856296Z" level=info msg="Started container" PID=2318 containerID=914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4 description=kube-system/kindnet-kp4sz/kindnet-cni id=1d6e3676-b4aa-48c5-8627-ceabdc7eebb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575af721feba4784a6f58f490d6cc1bc9b59cf56cfee964d23dc3651b2821cee
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.755361741Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.759595824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.759758123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.75983759Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.76690213Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.766946135Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.766970677Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771591307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771759843Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771833764Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775518302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775674119Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775748885Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.781050426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.78122651Z" level=info msg="Updated default CNI network name to kindnet"
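
Editor's note: the CREATE/WRITE/RENAME sequence above is CRI-O's inotify watch on /etc/cni/net.d reacting to kindnet's atomic config update (write 10-kindnet.conflist.temp, then rename it into place), after which CRI-O re-reads the default CNI network. A hedged sketch of the same watch pattern using the fsnotify package follows; it is illustrative and not CRI-O's actual watcher.

    // Sketch only: watch a CNI config directory the way CRI-O's monitor does.
    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()

    	// Directory taken from the log above.
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-w.Events:
    			// An atomic update surfaces as CREATE and WRITE on the .temp
    			// file, then a RENAME into the final conflist, as logged above.
    			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
    		case err := <-w.Errors:
    			log.Println("watch error:", err)
    		}
    	}
    }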
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	abdc643b1629a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   a5895f9361120       kube-apiserver-pause-013921            kube-system
	5a01a6c0046b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   2990e8dd0973c       kube-scheduler-pause-013921            kube-system
	398230665f3c2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   edf8ce0e1d2b4       etcd-pause-013921                      kube-system
	914737e35df86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   575af721feba4       kindnet-kp4sz                          kube-system
	e13e4a5e50873       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   24a269c2acf3f       kube-controller-manager-pause-013921   kube-system
	35849e728c577       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   736b017f96663       kube-proxy-wgqtw                       kube-system
	eb9ec360c79c1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   aa7a4ab7085d3       coredns-66bc5c9577-m4gxc               kube-system
	37abefbe208d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   aa7a4ab7085d3       coredns-66bc5c9577-m4gxc               kube-system
	0e92ae20df55d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   575af721feba4       kindnet-kp4sz                          kube-system
	704aaf411be74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   736b017f96663       kube-proxy-wgqtw                       kube-system
	670a5333bcde4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   2990e8dd0973c       kube-scheduler-pause-013921            kube-system
	7d264f78e16f6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   24a269c2acf3f       kube-controller-manager-pause-013921   kube-system
	e6a305cf0786e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a5895f9361120       kube-apiserver-pause-013921            kube-system
	6a188fa34356e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   edf8ce0e1d2b4       etcd-pause-013921                      kube-system
	
	
	==> coredns [37abefbe208d829b48a5663f6cc3a302e6cfa8ca844a962be1b40bc04483726e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46252 - 14709 "HINFO IN 8155846183561583170.8481793397817331275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056001713s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb9ec360c79c17be93409d1ec23c1e93ef3d42937f60d3e49f44310a90b0756a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36946 - 35190 "HINFO IN 246363323977596506.3270707713582301445. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030286391s
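
Editor's note: the restarted coredns instance held its DNS servers back ("waiting for Kubernetes API before starting server") while the apiserver was still refusing connections, then started with an unsynced API. The "plugin/ready" lines refer to CoreDNS's ready plugin, which serves an HTTP readiness endpoint; port 8181 below is that plugin's documented default and is an assumption if the Corefile overrides it.

    // Sketch: poll CoreDNS's ready endpoint until it reports 200 OK.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("http://127.0.0.1:8181/ready") // assumed default ready-plugin port
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("coredns ready")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("coredns still not ready")
    }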
	
	
	==> describe nodes <==
	Name:               pause-013921
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-013921
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-013921
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_09_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:09:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-013921
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:11:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:10:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-013921
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                818d2ec3-5a20-4cb5-90d8-6e7470e58faf
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-m4gxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-013921                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-kp4sz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-013921             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-013921    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-wgqtw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-013921             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 76s   kube-proxy       
	  Normal   Starting                 13s   kube-proxy       
	  Normal   Starting                 83s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s   kubelet          Node pause-013921 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s   kubelet          Node pause-013921 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s   kubelet          Node pause-013921 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-013921 event: Registered Node pause-013921 in Controller
	  Normal   NodeReady                36s   kubelet          Node pause-013921 status is now: NodeReady
	  Normal   RegisteredNode           11s   node-controller  Node pause-013921 event: Registered Node pause-013921 in Controller
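
Editor's note: the Conditions table above is the quickest health read in this dump; all pressure conditions are False and Ready turned True at 15:10:30, so the node recovered after the restart. A client-go sketch that prints the same Type/Status/Reason columns follows; the kubeconfig path is an assumption and the node name is taken from the log.

    // Sketch: read the node conditions shown in the table above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-013921", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		// Mirrors the Type/Status/Reason columns of the describe output.
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }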
	
	
	==> dmesg <==
	[Oct26 14:44] overlayfs: idmapped layers are currently not supported
	[Oct26 14:45] overlayfs: idmapped layers are currently not supported
	[  +3.305180] overlayfs: idmapped layers are currently not supported
	[ +47.970712] overlayfs: idmapped layers are currently not supported
	[Oct26 14:46] overlayfs: idmapped layers are currently not supported
	[Oct26 14:47] overlayfs: idmapped layers are currently not supported
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b] <==
	{"level":"warn","ts":"2025-10-26T15:10:50.332201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.366162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.440027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.453128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.469725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.502086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.527141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.556817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.576681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.598558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.626865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.645874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.670753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.682969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.706277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.727334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.758849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.776768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.797131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.823110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.843167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.868597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.883777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.900340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.995069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	
	
	==> etcd [6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765] <==
	{"level":"warn","ts":"2025-10-26T15:09:39.826707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.842839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.878094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.931696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.949283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.963955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:40.040141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:10:34.962702Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T15:10:34.962763Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-013921","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-26T15:10:34.962864Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T15:10:35.148144Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T15:10:35.148241Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.148262Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-26T15:10:35.148367Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T15:10:35.148388Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148757Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148800Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T15:10:35.148809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148895Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148910Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T15:10:35.148917Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.151825Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-26T15:10:35.151925Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.151971Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:10:35.151978Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-013921","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 15:11:06 up  4:53,  0 user,  load average: 4.29, 3.95, 2.87
	Linux pause-013921 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189] <==
	I1026 15:09:49.434538       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:09:49.436552       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:09:49.437401       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:09:49.437429       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:09:49.437445       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:09:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:09:49.649647       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:09:49.649679       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:09:49.649688       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:09:49.650411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:10:19.650612       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:10:19.650809       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:10:19.651028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:10:19.651158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:10:21.049862       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:10:21.049927       1 metrics.go:72] Registering metrics
	I1026 15:10:21.050000       1 controller.go:711] "Syncing nftables rules"
	I1026 15:10:29.649906       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:10:29.649979       1 main.go:301] handling current node
	
	
	==> kindnet [914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4] <==
	I1026 15:10:44.504426       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:10:44.535837       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:10:44.535985       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:10:44.535998       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:10:44.536012       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:10:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:10:44.765235       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:10:44.765333       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:10:44.765367       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:10:44.769540       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:10:52.279340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:10:52.279543       1 metrics.go:72] Registering metrics
	I1026 15:10:52.279636       1 controller.go:711] "Syncing nftables rules"
	I1026 15:10:54.754974       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:10:54.755065       1 main.go:301] handling current node
	I1026 15:11:04.756419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:04.756473       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a] <==
	I1026 15:10:52.237208       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 15:10:52.237858       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:10:52.238891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:10:52.239130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:10:52.239266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:10:52.244691       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:10:52.251203       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:10:52.251271       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:10:52.251395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:10:52.252016       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:10:52.252050       1 policy_source.go:240] refreshing policies
	I1026 15:10:52.252241       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:10:52.288208       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:10:52.317463       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:10:52.317562       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:10:52.317596       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:10:52.310000       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:10:52.317440       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 15:10:52.354824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:10:52.739434       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:10:54.153862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:10:55.552083       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:10:55.754380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:10:55.805752       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:10:55.905431       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [e6a305cf0786e2aad36171947c363b09df82a1e38658dcc14e6d059f4e70bde9] <==
	W1026 15:10:35.003755       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.003854       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.003955       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004079       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004177       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004290       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004422       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004098       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008227       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008480       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008655       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008847       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008955       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009057       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009252       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009406       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009493       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009600       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009697       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009697       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009795       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009862       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009949       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.010049       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.011985       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7d264f78e16f6ebb0288181f1d512d12ee03dbff850c905df2ffb045c57b4da6] <==
	I1026 15:09:48.062255       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:09:48.062881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:09:48.063607       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:09:48.063674       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:09:48.064340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:09:48.064491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:09:48.064541       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:09:48.064613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:09:48.067113       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:09:48.067156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:09:48.067551       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:09:48.067766       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:09:48.067775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:09:48.067780       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:09:48.067838       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:09:48.067853       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:09:48.072042       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:09:48.076184       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:09:48.076233       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:09:48.076255       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:09:48.076260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:09:48.076265       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:09:48.078069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:09:48.090335       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-013921" podCIDRs=["10.244.0.0/24"]
	I1026 15:10:33.118982       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e13e4a5e5087386a946a8787e9c00aa3b98d692a57e746f4c139d1959a1fd662] <==
	I1026 15:10:55.549779       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:10:55.549974       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:10:55.552223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:10:55.552369       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:10:55.552540       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-013921"
	I1026 15:10:55.553061       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:10:55.555466       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:10:55.562074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:10:55.565403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:10:55.566644       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:10:55.577116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:10:55.582412       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:10:55.590740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:10:55.590836       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:10:55.590870       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:10:55.596059       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:10:55.596158       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:10:55.596177       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:10:55.596186       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:10:55.596868       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:10:55.602483       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:10:55.602617       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:10:55.604836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:10:55.606532       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:10:55.618663       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [35849e728c577734171fde5d429ba0916a28decf50dbf44d86f32de2593e312a] <==
	I1026 15:10:44.199836       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:10:46.429913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:10:52.354391       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:10:52.354494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:10:52.354583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:10:52.497134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:10:52.497254       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:10:52.527754       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:10:52.528310       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:10:52.528597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:10:52.530736       1 config.go:200] "Starting service config controller"
	I1026 15:10:52.530812       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:10:52.530856       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:10:52.530903       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:10:52.530941       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:10:52.530976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:10:52.532729       1 config.go:309] "Starting node config controller"
	I1026 15:10:52.533083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:10:52.533132       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:10:52.631633       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:10:52.632194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:10:52.632223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [704aaf411be7483ceb755139de98cf5d875037d632ffe53352eef60c60e3b1f6] <==
	I1026 15:09:49.455589       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:09:49.542482       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:09:49.643052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:09:49.643088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:09:49.643175       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:09:49.738506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:09:49.738557       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:09:49.792607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:09:49.834564       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:09:49.834599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:09:49.840468       1 config.go:200] "Starting service config controller"
	I1026 15:09:49.840494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:09:49.840519       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:09:49.840523       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:09:49.840536       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:09:49.840540       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:09:49.849047       1 config.go:309] "Starting node config controller"
	I1026 15:09:49.849073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:09:49.849098       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:09:49.942046       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:09:49.942146       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:09:49.942453       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb] <==
	I1026 15:10:49.607903       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:10:52.137223       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:10:52.137341       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:10:52.137376       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:10:52.137428       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:10:52.281607       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:10:52.281647       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:10:52.284178       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:10:52.284619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:52.284691       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:52.284748       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:10:52.385635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [670a5333bcde4a85128035b6a2625af2679daaf9a076613625fe8fb0dafef960] <==
	E1026 15:09:41.632779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:09:41.632895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:09:41.632995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:09:41.633092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:09:41.633224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:09:41.633279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:09:41.633352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:09:41.634351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:09:41.634371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:09:41.634426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:09:41.634473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:09:41.634515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:09:41.634556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:09:41.634624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:09:41.634634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:09:41.634802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:09:41.635285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:09:41.635397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 15:09:42.918772       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:34.986952       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 15:10:34.986979       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 15:10:34.987012       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 15:10:34.987037       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:34.987184       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 15:10:34.987200       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 26 15:10:43 pause-013921 kubelet[1318]: I1026 15:10:43.954188    1318 scope.go:117] "RemoveContainer" containerID="6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.954745    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-m4gxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.954948    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.955101    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.955916    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ee84abc68093979c30db64ad57962b9" pod="kube-system/kube-scheduler-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.956215    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="241995cde8a23636ea25b16fc26570bf" pod="kube-system/etcd-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.956458    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wgqtw\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba6397bd-2e60-4dda-8795-8b9077a43fac" pod="kube-system/kube-proxy-wgqtw"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: I1026 15:10:43.990690    1318 scope.go:117] "RemoveContainer" containerID="0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991294    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kp4sz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="54ccaa27-d65d-4e68-9e16-5e62d7de2cc7" pod="kube-system/kindnet-kp4sz"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991472    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-m4gxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991654    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991818    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991980    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ee84abc68093979c30db64ad57962b9" pod="kube-system/kube-scheduler-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.992152    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="241995cde8a23636ea25b16fc26570bf" pod="kube-system/etcd-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.992298    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wgqtw\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba6397bd-2e60-4dda-8795-8b9077a43fac" pod="kube-system/kube-proxy-wgqtw"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.983260    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-013921\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.983305    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-013921\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.989012    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-kp4sz\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="54ccaa27-d65d-4e68-9e16-5e62d7de2cc7" pod="kube-system/kindnet-kp4sz"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.056338    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-m4gxc\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.127619    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-013921\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.178334    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-013921\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:53 pause-013921 kubelet[1318]: W1026 15:10:53.996724    1318 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 26 15:11:03 pause-013921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:11:03 pause-013921 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:11:03 pause-013921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-013921 -n pause-013921
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-013921 -n pause-013921: exit status 2 (708.443541ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-013921 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
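The paused state asserted above can also be cross-checked from the host. A minimal sketch, assuming the profile name pause-013921 taken from these logs and crictl inside the node (the same ssh form the Audit table below uses):

	# Docker-level pause flag of the node container; minikube pause suspends the
	# workloads via the container runtime, not the kic container itself, so this
	# is expected to print "false" (matching "Paused": false in the docker
	# inspect dump below).
	docker inspect -f '{{.State.Paused}}' pause-013921

	# List all CRI-O containers inside the node; paused or stopped workloads
	# show a non-running state here.
	out/minikube-linux-arm64 ssh -p pause-013921 sudo crictl ps -a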
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-013921
helpers_test.go:243: (dbg) docker inspect pause-013921:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a",
	        "Created": "2025-10-26T15:09:12.659424926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 870090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:09:12.728946094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/hosts",
	        "LogPath": "/var/lib/docker/containers/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a/9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a-json.log",
	        "Name": "/pause-013921",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-013921:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-013921",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c1a31e1575f55dbe234b100f45776d5882a5ade1bfbd4d82da7ce9d555c075a",
	                "LowerDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52ea94dca7ac28ab24574d5df01628a09df44406be9f952548a391d5fd98fee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-013921",
	                "Source": "/var/lib/docker/volumes/pause-013921/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-013921",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-013921",
	                "name.minikube.sigs.k8s.io": "pause-013921",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89d54c1044fb696cdb9d8c710562eeb3929ad6d4088547cf64bed6f04dae8230",
	            "SandboxKey": "/var/run/docker/netns/89d54c1044fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-013921": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:38:99:d1:28:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac56ab308ea765edb4363917bce02a9ff3d550774ece55d26f6010b394f891fb",
	                    "EndpointID": "a423f00ed9f58dec3fe84a0b88e0bbf07194c005056eb78432395fc45c637be7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-013921",
	                        "9c1a31e1575f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
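Single fields from an inspect dump like the one above can be read directly with Docker's Go-template --format flag instead of scanning the full JSON. A small sketch against the same container:

	# Container state ("running" per the dump above).
	docker inspect --format '{{.State.Status}}' pause-013921

	# Host port mapped to the API server port 8443/tcp (33795 in this run).
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-013921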
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-013921 -n pause-013921
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-013921 -n pause-013921: exit status 2 (496.032729ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-013921 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-013921 logs -n 25: (1.667115478s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat kubelet --no-pager                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status docker --all --full --no-pager                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat docker --no-pager                                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/docker/daemon.json                                                          │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo docker system info                                                                   │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cri-dockerd --version                                                                │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat containerd --no-pager                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/containerd/config.toml                                                      │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo containerd config dump                                                               │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status crio --all --full --no-pager                                        │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                        │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                          │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                     │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:11:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:11:00.637188  878634 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:11:00.637394  878634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:00.637416  878634 out.go:374] Setting ErrFile to fd 2...
	I1026 15:11:00.637437  878634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:11:00.637767  878634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:11:00.638214  878634 out.go:368] Setting JSON to false
	I1026 15:11:00.639282  878634 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17613,"bootTime":1761473848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:11:00.639386  878634 start.go:141] virtualization:  
	I1026 15:11:00.642811  878634 out.go:179] * [force-systemd-env-969063] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:11:00.646608  878634 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:11:00.646673  878634 notify.go:220] Checking for updates...
	I1026 15:11:00.649632  878634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:11:00.652570  878634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:11:00.655524  878634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:11:00.658498  878634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:11:00.661477  878634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1026 15:11:00.665056  878634 config.go:182] Loaded profile config "pause-013921": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:11:00.665176  878634 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:11:00.690335  878634 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:11:00.690461  878634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:11:00.755487  878634 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:11:00.74554968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:11:00.755607  878634 docker.go:318] overlay module found
	I1026 15:11:00.758865  878634 out.go:179] * Using the docker driver based on user configuration
	I1026 15:11:00.761739  878634 start.go:305] selected driver: docker
	I1026 15:11:00.761754  878634 start.go:925] validating driver "docker" against <nil>
	I1026 15:11:00.761768  878634 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:11:00.762527  878634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:11:00.819931  878634 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:11:00.810324028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:11:00.820099  878634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:11:00.820336  878634 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 15:11:00.823390  878634 out.go:179] * Using Docker driver with root privileges
	I1026 15:11:00.826247  878634 cni.go:84] Creating CNI manager for ""
	I1026 15:11:00.826319  878634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:11:00.826332  878634 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:11:00.826423  878634 start.go:349] cluster config:
	{Name:force-systemd-env-969063 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-969063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:11:00.829616  878634 out.go:179] * Starting "force-systemd-env-969063" primary control-plane node in "force-systemd-env-969063" cluster
	I1026 15:11:00.832462  878634 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:11:00.835389  878634 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:11:00.838222  878634 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:11:00.838285  878634 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:11:00.838298  878634 cache.go:58] Caching tarball of preloaded images
	I1026 15:11:00.838312  878634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:11:00.838381  878634 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:11:00.838391  878634 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:11:00.838500  878634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/force-systemd-env-969063/config.json ...
	I1026 15:11:00.838523  878634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/force-systemd-env-969063/config.json: {Name:mk0645029bc7abde238533bd999197e2691e01a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:00.858023  878634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:11:00.858050  878634 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:11:00.858064  878634 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:11:00.858096  878634 start.go:360] acquireMachinesLock for force-systemd-env-969063: {Name:mk0a8b274aacb71c750bf4fb27f4bdfea5670c13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:11:00.858202  878634 start.go:364] duration metric: took 89.568µs to acquireMachinesLock for "force-systemd-env-969063"
	I1026 15:11:00.858233  878634 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-969063 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-969063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:11:00.858299  878634 start.go:125] createHost starting for "" (driver="docker")
	W1026 15:10:58.864675  876003 pod_ready.go:104] pod "kube-apiserver-pause-013921" is not "Ready", error: <nil>
	I1026 15:11:00.367627  876003 pod_ready.go:94] pod "kube-apiserver-pause-013921" is "Ready"
	I1026 15:11:00.367657  876003 pod_ready.go:86] duration metric: took 3.509126086s for pod "kube-apiserver-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:00.371301  876003 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.378485  876003 pod_ready.go:94] pod "kube-controller-manager-pause-013921" is "Ready"
	I1026 15:11:02.378569  876003 pod_ready.go:86] duration metric: took 2.007241545s for pod "kube-controller-manager-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.381658  876003 pod_ready.go:83] waiting for pod "kube-proxy-wgqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.387274  876003 pod_ready.go:94] pod "kube-proxy-wgqtw" is "Ready"
	I1026 15:11:02.387297  876003 pod_ready.go:86] duration metric: took 5.615071ms for pod "kube-proxy-wgqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.390137  876003 pod_ready.go:83] waiting for pod "kube-scheduler-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.542931  876003 pod_ready.go:94] pod "kube-scheduler-pause-013921" is "Ready"
	I1026 15:11:02.542961  876003 pod_ready.go:86] duration metric: took 152.75055ms for pod "kube-scheduler-pause-013921" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:02.542976  876003 pod_ready.go:40] duration metric: took 9.266575087s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:02.604030  876003 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:11:02.607596  876003 out.go:179] * Done! kubectl is now configured to use "pause-013921" cluster and "default" namespace by default
	I1026 15:11:00.863561  878634 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:11:00.863825  878634 start.go:159] libmachine.API.Create for "force-systemd-env-969063" (driver="docker")
	I1026 15:11:00.863878  878634 client.go:168] LocalClient.Create starting
	I1026 15:11:00.863953  878634 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:11:00.863989  878634 main.go:141] libmachine: Decoding PEM data...
	I1026 15:11:00.864006  878634 main.go:141] libmachine: Parsing certificate...
	I1026 15:11:00.864063  878634 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:11:00.864084  878634 main.go:141] libmachine: Decoding PEM data...
	I1026 15:11:00.864099  878634 main.go:141] libmachine: Parsing certificate...
	I1026 15:11:00.864512  878634 cli_runner.go:164] Run: docker network inspect force-systemd-env-969063 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:11:00.886143  878634 cli_runner.go:211] docker network inspect force-systemd-env-969063 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:11:00.886242  878634 network_create.go:284] running [docker network inspect force-systemd-env-969063] to gather additional debugging logs...
	I1026 15:11:00.886259  878634 cli_runner.go:164] Run: docker network inspect force-systemd-env-969063
	W1026 15:11:00.902529  878634 cli_runner.go:211] docker network inspect force-systemd-env-969063 returned with exit code 1
	I1026 15:11:00.902559  878634 network_create.go:287] error running [docker network inspect force-systemd-env-969063]: docker network inspect force-systemd-env-969063: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-969063 not found
	I1026 15:11:00.902574  878634 network_create.go:289] output of [docker network inspect force-systemd-env-969063]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-969063 not found
	
	** /stderr **
	I1026 15:11:00.902696  878634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:11:00.921316  878634 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:11:00.921690  878634 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:11:00.922040  878634 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:11:00.922504  878634 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08300}
	I1026 15:11:00.922529  878634 network_create.go:124] attempt to create docker network force-systemd-env-969063 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:11:00.922592  878634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-969063 force-systemd-env-969063
	I1026 15:11:00.983730  878634 network_create.go:108] docker network force-systemd-env-969063 192.168.76.0/24 created
	I1026 15:11:00.983767  878634 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-969063" container
	I1026 15:11:00.983843  878634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:11:01.002322  878634 cli_runner.go:164] Run: docker volume create force-systemd-env-969063 --label name.minikube.sigs.k8s.io=force-systemd-env-969063 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:11:01.022526  878634 oci.go:103] Successfully created a docker volume force-systemd-env-969063
	I1026 15:11:01.022622  878634 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-969063-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-969063 --entrypoint /usr/bin/test -v force-systemd-env-969063:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:11:01.565004  878634 oci.go:107] Successfully prepared a docker volume force-systemd-env-969063
	I1026 15:11:01.565064  878634 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:11:01.565088  878634 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:11:01.565175  878634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-969063:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
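	The start log breaks off mid-provision: minikube found 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 taken, created the bridge network force-systemd-env-969063 on 192.168.76.0/24, and began extracting the preload tarball into a volume of the same name. A sketch of host-side checks for that state, assuming the force-systemd-env-969063 resources still exist:
	
	    # Subnet and gateway of the network minikube created
	    docker network inspect force-systemd-env-969063 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # Volume the preloaded images were extracted into
	    docker volume inspect force-systemd-env-969063 --format '{{.Mountpoint}}'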
	
	
	==> CRI-O <==
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.275673634Z" level=info msg="Created container abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a: kube-system/kube-apiserver-pause-013921/kube-apiserver" id=a84bf22b-d3fb-4784-8bf8-c658b7cc7187 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.276679545Z" level=info msg="Starting container: 5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb" id=01e4f056-a069-47f1-afe8-4692cfe93e6a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.278703043Z" level=info msg="Created container 914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4: kube-system/kindnet-kp4sz/kindnet-cni" id=8dd7f813-a366-42f7-b808-08c11e3ecbbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.282837211Z" level=info msg="Starting container: 398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b" id=fcb1ed8d-3fb7-4dec-af0b-0d15b174e985 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.284864894Z" level=info msg="Starting container: abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a" id=3dac5819-f992-42bd-84b5-9f664b6d3bbd name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.287174664Z" level=info msg="Started container" PID=2320 containerID=398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b description=kube-system/etcd-pause-013921/etcd id=fcb1ed8d-3fb7-4dec-af0b-0d15b174e985 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edf8ce0e1d2b4b0b7280cc980e4fe7d2a5dba0ad72ab00142970446e399f98cc
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.288587726Z" level=info msg="Started container" PID=2316 containerID=5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb description=kube-system/kube-scheduler-pause-013921/kube-scheduler id=01e4f056-a069-47f1-afe8-4692cfe93e6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2990e8dd0973ccd6ee48a1287e63d86c244251c0fcdad5925c3f3eb556eb6e99
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.303563502Z" level=info msg="Started container" PID=2323 containerID=abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a description=kube-system/kube-apiserver-pause-013921/kube-apiserver id=3dac5819-f992-42bd-84b5-9f664b6d3bbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5895f9361120e9b34523d661eed9e740b377a50880b7198caf0d9bb8d0c1e79
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.306028219Z" level=info msg="Starting container: 914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4" id=1d6e3676-b4aa-48c5-8627-ceabdc7eebb7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:10:44 pause-013921 crio[2087]: time="2025-10-26T15:10:44.313856296Z" level=info msg="Started container" PID=2318 containerID=914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4 description=kube-system/kindnet-kp4sz/kindnet-cni id=1d6e3676-b4aa-48c5-8627-ceabdc7eebb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575af721feba4784a6f58f490d6cc1bc9b59cf56cfee964d23dc3651b2821cee
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.755361741Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.759595824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.759758123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.75983759Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.76690213Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.766946135Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.766970677Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771591307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771759843Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.771833764Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775518302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775674119Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.775748885Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.781050426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:10:54 pause-013921 crio[2087]: time="2025-10-26T15:10:54.78122651Z" level=info msg="Updated default CNI network name to kindnet"
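	The CREATE/WRITE/RENAME sequence above is CRI-O's config watcher observing kindnet rewriting its CNI config atomically (write a .temp file, then rename it into place). The resulting file can be read straight off the node with the same ssh pattern used in the command table; a sketch:
	
	    minikube ssh -p pause-013921 sudo cat /etc/cni/net.d/10-kindnet.conflist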
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	abdc643b1629a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   a5895f9361120       kube-apiserver-pause-013921            kube-system
	5a01a6c0046b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   2990e8dd0973c       kube-scheduler-pause-013921            kube-system
	398230665f3c2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   edf8ce0e1d2b4       etcd-pause-013921                      kube-system
	914737e35df86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   575af721feba4       kindnet-kp4sz                          kube-system
	e13e4a5e50873       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   24a269c2acf3f       kube-controller-manager-pause-013921   kube-system
	35849e728c577       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   736b017f96663       kube-proxy-wgqtw                       kube-system
	eb9ec360c79c1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   aa7a4ab7085d3       coredns-66bc5c9577-m4gxc               kube-system
	37abefbe208d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago       Exited              coredns                   0                   aa7a4ab7085d3       coredns-66bc5c9577-m4gxc               kube-system
	0e92ae20df55d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   575af721feba4       kindnet-kp4sz                          kube-system
	704aaf411be74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   736b017f96663       kube-proxy-wgqtw                       kube-system
	670a5333bcde4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   2990e8dd0973c       kube-scheduler-pause-013921            kube-system
	7d264f78e16f6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   24a269c2acf3f       kube-controller-manager-pause-013921   kube-system
	e6a305cf0786e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a5895f9361120       kube-apiserver-pause-013921            kube-system
	6a188fa34356e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   edf8ce0e1d2b4       etcd-pause-013921                      kube-system
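	This is the CRI-level view of the restart under test: each pre-restart container sits in Exited state with ATTEMPT 0, and its replacement runs with ATTEMPT 1 inside the same pod sandbox (matching POD IDs). Roughly what the following produces on the node; a sketch:
	
	    minikube ssh -p pause-013921 sudo crictl ps -a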
	
	
	==> coredns [37abefbe208d829b48a5663f6cc3a302e6cfa8ca844a962be1b40bc04483726e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46252 - 14709 "HINFO IN 8155846183561583170.8481793397817331275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056001713s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb9ec360c79c17be93409d1ec23c1e93ef3d42937f60d3e49f44310a90b0756a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36946 - 35190 "HINFO IN 246363323977596506.3270707713582301445. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030286391s
	
	
	==> describe nodes <==
	Name:               pause-013921
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-013921
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-013921
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_09_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:09:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-013921
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:11:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:09:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:10:30 +0000   Sun, 26 Oct 2025 15:10:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-013921
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                818d2ec3-5a20-4cb5-90d8-6e7470e58faf
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-m4gxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-013921                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-kp4sz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-pause-013921             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-013921    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-wgqtw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-013921             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 79s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 86s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s   kubelet          Node pause-013921 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s   kubelet          Node pause-013921 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s   kubelet          Node pause-013921 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s   node-controller  Node pause-013921 event: Registered Node pause-013921 in Controller
	  Normal   NodeReady                39s   kubelet          Node pause-013921 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-013921 event: Registered Node pause-013921 in Controller
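	The node description is standard kubectl output; the duplicated Starting (79s/17s) and RegisteredNode (81s/14s) events are the before/after traces of the restart. A sketch of reproducing it, assuming minikube's default context naming:
	
	    kubectl --context pause-013921 describe node pause-013921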
	
	
	==> dmesg <==
	[Oct26 14:44] overlayfs: idmapped layers are currently not supported
	[Oct26 14:45] overlayfs: idmapped layers are currently not supported
	[  +3.305180] overlayfs: idmapped layers are currently not supported
	[ +47.970712] overlayfs: idmapped layers are currently not supported
	[Oct26 14:46] overlayfs: idmapped layers are currently not supported
	[Oct26 14:47] overlayfs: idmapped layers are currently not supported
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [398230665f3c2b61a9eadcc5557aa1cf6cf3ef82fb0dfe75b3d328a52b9bb61b] <==
	{"level":"warn","ts":"2025-10-26T15:10:50.332201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.366162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.440027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.453128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.469725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.502086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.527141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.556817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.576681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.598558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.626865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.645874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.670753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.682969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.706277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.727334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.758849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.776768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.797131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.823110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.843167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.868597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.883777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.900340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:50.995069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	
	
	==> etcd [6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765] <==
	{"level":"warn","ts":"2025-10-26T15:09:39.826707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.842839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.878094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.931696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.949283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:39.963955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:09:40.040141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:10:34.962702Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T15:10:34.962763Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-013921","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-26T15:10:34.962864Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T15:10:35.148144Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T15:10:35.148241Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.148262Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-26T15:10:35.148367Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T15:10:35.148388Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148757Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148800Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T15:10:35.148809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148895Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T15:10:35.148910Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T15:10:35.148917Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.151825Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-26T15:10:35.151925Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T15:10:35.151971Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:10:35.151978Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-013921","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 15:11:09 up  4:53,  0 user,  load average: 4.27, 3.95, 2.87
	Linux pause-013921 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
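	These three lines look like the output of standard host probes; a sketch of commands that would produce equivalent output on the node:
	
	    uptime
	    uname -a
	    grep PRETTY_NAME /etc/os-release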
	
	
	==> kindnet [0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189] <==
	I1026 15:09:49.434538       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:09:49.436552       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:09:49.437401       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:09:49.437429       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:09:49.437445       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:09:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:09:49.649647       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:09:49.649679       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:09:49.649688       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:09:49.650411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:10:19.650612       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:10:19.650809       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:10:19.651028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:10:19.651158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:10:21.049862       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:10:21.049927       1 metrics.go:72] Registering metrics
	I1026 15:10:21.050000       1 controller.go:711] "Syncing nftables rules"
	I1026 15:10:29.649906       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:10:29.649979       1 main.go:301] handling current node
	
	
	==> kindnet [914737e35df86bcd438c21875ed89d46a56584a911a3a6b4b83ed368cb7e44a4] <==
	I1026 15:10:44.504426       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:10:44.535837       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:10:44.535985       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:10:44.535998       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:10:44.536012       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:10:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:10:44.765235       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:10:44.765333       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:10:44.765367       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:10:44.769540       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:10:52.279340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:10:52.279543       1 metrics.go:72] Registering metrics
	I1026 15:10:52.279636       1 controller.go:711] "Syncing nftables rules"
	I1026 15:10:54.754974       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:10:54.755065       1 main.go:301] handling current node
	I1026 15:11:04.756419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:04.756473       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abdc643b1629a692b524a39d8d42365c4b7eb78d22044b1705b9a9abf747bd0a] <==
	I1026 15:10:52.237208       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 15:10:52.237858       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:10:52.238891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:10:52.239130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:10:52.239266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:10:52.244691       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:10:52.251203       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:10:52.251271       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:10:52.251395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:10:52.252016       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:10:52.252050       1 policy_source.go:240] refreshing policies
	I1026 15:10:52.252241       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:10:52.288208       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:10:52.317463       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:10:52.317562       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:10:52.317596       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:10:52.310000       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:10:52.317440       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 15:10:52.354824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:10:52.739434       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:10:54.153862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:10:55.552083       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:10:55.754380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:10:55.805752       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:10:55.905431       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [e6a305cf0786e2aad36171947c363b09df82a1e38658dcc14e6d059f4e70bde9] <==
	W1026 15:10:35.003755       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.003854       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.003955       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004079       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004177       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004290       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004422       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.004098       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008227       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008480       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008655       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008847       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.008955       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009057       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009252       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009406       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009493       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009600       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009697       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009697       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009795       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009862       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.009949       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.010049       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 15:10:35.011985       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7d264f78e16f6ebb0288181f1d512d12ee03dbff850c905df2ffb045c57b4da6] <==
	I1026 15:09:48.062255       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:09:48.062881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:09:48.063607       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:09:48.063674       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:09:48.064340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:09:48.064491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:09:48.064541       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:09:48.064613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:09:48.067113       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:09:48.067156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:09:48.067551       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:09:48.067766       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:09:48.067775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:09:48.067780       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:09:48.067838       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:09:48.067853       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:09:48.072042       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:09:48.076184       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:09:48.076233       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:09:48.076255       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:09:48.076260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:09:48.076265       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:09:48.078069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:09:48.090335       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-013921" podCIDRs=["10.244.0.0/24"]
	I1026 15:10:33.118982       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e13e4a5e5087386a946a8787e9c00aa3b98d692a57e746f4c139d1959a1fd662] <==
	I1026 15:10:55.549779       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:10:55.549974       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:10:55.552223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:10:55.552369       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:10:55.552540       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-013921"
	I1026 15:10:55.553061       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:10:55.555466       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:10:55.562074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:10:55.565403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:10:55.566644       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:10:55.577116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:10:55.582412       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:10:55.590740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:10:55.590836       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:10:55.590870       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:10:55.596059       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:10:55.596158       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:10:55.596177       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:10:55.596186       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:10:55.596868       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:10:55.602483       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:10:55.602617       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:10:55.604836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:10:55.606532       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:10:55.618663       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [35849e728c577734171fde5d429ba0916a28decf50dbf44d86f32de2593e312a] <==
	I1026 15:10:44.199836       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:10:46.429913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:10:52.354391       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:10:52.354494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:10:52.354583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:10:52.497134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:10:52.497254       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:10:52.527754       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:10:52.528310       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:10:52.528597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:10:52.530736       1 config.go:200] "Starting service config controller"
	I1026 15:10:52.530812       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:10:52.530856       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:10:52.530903       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:10:52.530941       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:10:52.530976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:10:52.532729       1 config.go:309] "Starting node config controller"
	I1026 15:10:52.533083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:10:52.533132       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:10:52.631633       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:10:52.632194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:10:52.632223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [704aaf411be7483ceb755139de98cf5d875037d632ffe53352eef60c60e3b1f6] <==
	I1026 15:09:49.455589       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:09:49.542482       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:09:49.643052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:09:49.643088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:09:49.643175       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:09:49.738506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:09:49.738557       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:09:49.792607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:09:49.834564       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:09:49.834599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:09:49.840468       1 config.go:200] "Starting service config controller"
	I1026 15:09:49.840494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:09:49.840519       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:09:49.840523       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:09:49.840536       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:09:49.840540       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:09:49.849047       1 config.go:309] "Starting node config controller"
	I1026 15:09:49.849073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:09:49.849098       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:09:49.942046       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:09:49.942146       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:09:49.942453       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5a01a6c0046b0aca6f1561f8bdf5869f13c40c137d03d26477ad1cc713349dfb] <==
	I1026 15:10:49.607903       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:10:52.137223       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:10:52.137341       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:10:52.137376       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:10:52.137428       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:10:52.281607       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:10:52.281647       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:10:52.284178       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:10:52.284619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:52.284691       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:52.284748       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:10:52.385635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [670a5333bcde4a85128035b6a2625af2679daaf9a076613625fe8fb0dafef960] <==
	E1026 15:09:41.632779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:09:41.632895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:09:41.632995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:09:41.633092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:09:41.633224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:09:41.633279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:09:41.633352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:09:41.634351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:09:41.634371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:09:41.634426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:09:41.634473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:09:41.634515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:09:41.634556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:09:41.634624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:09:41.634634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:09:41.634802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:09:41.635285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:09:41.635397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 15:09:42.918772       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:34.986952       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 15:10:34.986979       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 15:10:34.987012       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 15:10:34.987037       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:10:34.987184       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 15:10:34.987200       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 26 15:10:43 pause-013921 kubelet[1318]: I1026 15:10:43.954188    1318 scope.go:117] "RemoveContainer" containerID="6a188fa34356e8fd4e7aa1f80b8d110d6b28027c929fc30cb85d27fe0bf4e765"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.954745    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-m4gxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.954948    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.955101    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.955916    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ee84abc68093979c30db64ad57962b9" pod="kube-system/kube-scheduler-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.956215    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="241995cde8a23636ea25b16fc26570bf" pod="kube-system/etcd-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.956458    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wgqtw\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba6397bd-2e60-4dda-8795-8b9077a43fac" pod="kube-system/kube-proxy-wgqtw"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: I1026 15:10:43.990690    1318 scope.go:117] "RemoveContainer" containerID="0e92ae20df55df51ed0feeaf22ce2e4d110936b8a3f6e5829940eb5c53ff4189"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991294    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kp4sz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="54ccaa27-d65d-4e68-9e16-5e62d7de2cc7" pod="kube-system/kindnet-kp4sz"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991472    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-m4gxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991654    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991818    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.991980    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ee84abc68093979c30db64ad57962b9" pod="kube-system/kube-scheduler-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.992152    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-013921\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="241995cde8a23636ea25b16fc26570bf" pod="kube-system/etcd-pause-013921"
	Oct 26 15:10:43 pause-013921 kubelet[1318]: E1026 15:10:43.992298    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wgqtw\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba6397bd-2e60-4dda-8795-8b9077a43fac" pod="kube-system/kube-proxy-wgqtw"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.983260    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-013921\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.983305    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-013921\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 26 15:10:51 pause-013921 kubelet[1318]: E1026 15:10:51.989012    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-kp4sz\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="54ccaa27-d65d-4e68-9e16-5e62d7de2cc7" pod="kube-system/kindnet-kp4sz"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.056338    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-m4gxc\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="835b87e8-794b-4354-b47d-a7a0a01bb07a" pod="kube-system/coredns-66bc5c9577-m4gxc"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.127619    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-013921\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="cc48965d5ca0f8aa2a1e2f968dd258dd" pod="kube-system/kube-apiserver-pause-013921"
	Oct 26 15:10:52 pause-013921 kubelet[1318]: E1026 15:10:52.178334    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-013921\" is forbidden: User \"system:node:pause-013921\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-013921' and this object" podUID="1d424c30a8042d1f3df678c28b12424d" pod="kube-system/kube-controller-manager-pause-013921"
	Oct 26 15:10:53 pause-013921 kubelet[1318]: W1026 15:10:53.996724    1318 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 26 15:11:03 pause-013921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:11:03 pause-013921 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:11:03 pause-013921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
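The kubelet shutdown recorded at 15:11:03 in the log above is the expected effect of pausing: minikube stops the node agent and freezes the workload containers. A minimal sketch for inspecting that state by hand, assuming the pause-013921 node container is still running (minikube's internal sequence may differ):

	# Confirm the kubelet unit was stopped inside the node container
	docker exec pause-013921 sudo systemctl is-active kubelet
	# List all container states via the CRI, paused workloads included
	docker exec pause-013921 sudo crictl ps -a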
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-013921 -n pause-013921
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-013921 -n pause-013921: exit status 2 (419.346279ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
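The status probe above selects a single field with a Go template. A sketch of the same query, assuming the profile still exists:

	# Print only the API server state for the profile
	out/minikube-linux-arm64 status -p pause-013921 --format='{{.APIServer}}'
	# The exit code encodes component health, so the command can exit
	# non-zero even while the selected field prints "Running"; that is
	# why the harness treats exit status 2 as possibly benign.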
helpers_test.go:269: (dbg) Run:  kubectl --context pause-013921 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.425727ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
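The MK_ADDON_ENABLE_PAUSED failure comes from minikube's paused-state probe, which shells out to "sudo runc list -f json" on the node before touching the addon; on this crio node the runc state directory /run/runc is missing, so the probe itself errors out. A rough manual reproduction (container name taken from this report; that runc consults /run/runc by default here is an assumption):

	# Run the same listing the probe uses, inside the node container
	docker exec old-k8s-version-304880 sudo runc list -f json
	# Expected failure mode, matching the stderr above:
	#   open /run/runc: no such file or directory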
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-304880 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-304880 describe deploy/metrics-server -n kube-system: exit status 1 (92.675336ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-304880 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
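The assertion checks that the deployment picked up the image reference remapped by the --images/--registries flags. A sketch of verifying it directly once the deployment exists (the jsonpath expression is illustrative, not what the harness runs):

	kubectl --context old-k8s-version-304880 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/registry.k8s.io/echoserver:1.4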
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-304880
helpers_test.go:243: (dbg) docker inspect old-k8s-version-304880:

-- stdout --
	[
	    {
	        "Id": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	        "Created": "2025-10-26T15:12:25.477698676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 887768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:12:25.538576908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hosts",
	        "LogPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e-json.log",
	        "Name": "/old-k8s-version-304880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-304880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-304880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	                "LowerDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-304880",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-304880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-304880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e01d23599ae43d0ebf68357be42c941c1530e4e9e65ceace12262dce2ed549eb",
	            "SandboxKey": "/var/run/docker/netns/e01d23599ae4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-304880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:84:b3:ef:7e:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "898d058c113eb58f8fe58567875d58d2d8a62f1424e6f7b780d853a2a1be653f",
	                    "EndpointID": "a407eacc1de7232c6642273e241f4929baa52aef2b599aacd6e6cf88c8d6ca8f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-304880",
	                        "47abca8f012a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
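The inspect JSON above is where the harness reads back the dynamically assigned host ports. As a minimal sketch, the same lookup minikube itself performs later in this log can be done by hand with a standard docker CLI format template (profile name copied from the output above):

    # Print the host port mapped to the container's SSH port (22/tcp):
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      old-k8s-version-304880
    # Expected output, given the JSON above: 33817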
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25: (1.213165481s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo containerd config dump                                                                                                                                                                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                                                                                                                                                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p pause-013921                                                                                                                                                                                                                               │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-963871   │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
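The last audit entry above is the step under test. A sketch of re-running it by hand, with arguments copied verbatim from the table (assumes the profile still exists):

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-304880 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain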
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:12:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:12:18.730474  887383 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:18.730597  887383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:18.730607  887383 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:18.730612  887383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:18.730851  887383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:12:18.731267  887383 out.go:368] Setting JSON to false
	I1026 15:12:18.732236  887383 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17691,"bootTime":1761473848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:12:18.732306  887383 start.go:141] virtualization:  
	I1026 15:12:18.736307  887383 out.go:179] * [old-k8s-version-304880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:12:18.741071  887383 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:18.741162  887383 notify.go:220] Checking for updates...
	I1026 15:12:18.748220  887383 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:18.751678  887383 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:12:18.754956  887383 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:12:18.758159  887383 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:12:18.761229  887383 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:18.764873  887383 config.go:182] Loaded profile config "cert-expiration-963871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:18.765005  887383 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:18.790953  887383 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:12:18.791091  887383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:18.859633  887383 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:12:18.849805642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:12:18.859753  887383 docker.go:318] overlay module found
	I1026 15:12:18.862974  887383 out.go:179] * Using the docker driver based on user configuration
	I1026 15:12:18.865944  887383 start.go:305] selected driver: docker
	I1026 15:12:18.865964  887383 start.go:925] validating driver "docker" against <nil>
	I1026 15:12:18.865979  887383 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:18.866727  887383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:18.922840  887383 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:12:18.913279432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:12:18.922993  887383 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:12:18.923226  887383 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:12:18.926256  887383 out.go:179] * Using Docker driver with root privileges
	I1026 15:12:18.929209  887383 cni.go:84] Creating CNI manager for ""
	I1026 15:12:18.929287  887383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:18.929305  887383 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:12:18.929384  887383 start.go:349] cluster config:
	{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:18.932485  887383 out.go:179] * Starting "old-k8s-version-304880" primary control-plane node in "old-k8s-version-304880" cluster
	I1026 15:12:18.935436  887383 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:12:18.938503  887383 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:12:18.941288  887383 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:12:18.941352  887383 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 15:12:18.941366  887383 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:18.941455  887383 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:12:18.941468  887383 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 15:12:18.941587  887383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:12:18.941612  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json: {Name:mk2f2b7aa9d006d69d175d39650c38e54c33cd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:18.941783  887383 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:12:18.964529  887383 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:12:18.964550  887383 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:12:18.964567  887383 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:12:18.964590  887383 start.go:360] acquireMachinesLock for old-k8s-version-304880: {Name:mk7199322885b6a14cdd6d843ed9457416dde222 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:18.964730  887383 start.go:364] duration metric: took 120.436µs to acquireMachinesLock for "old-k8s-version-304880"
	I1026 15:12:18.964761  887383 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:12:18.964847  887383 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:12:18.968460  887383 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:12:18.968781  887383 start.go:159] libmachine.API.Create for "old-k8s-version-304880" (driver="docker")
	I1026 15:12:18.968836  887383 client.go:168] LocalClient.Create starting
	I1026 15:12:18.968915  887383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:12:18.968957  887383 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:18.968978  887383 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:18.969036  887383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:12:18.969064  887383 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:18.969074  887383 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:18.969426  887383 cli_runner.go:164] Run: docker network inspect old-k8s-version-304880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:12:18.985931  887383 cli_runner.go:211] docker network inspect old-k8s-version-304880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:12:18.986012  887383 network_create.go:284] running [docker network inspect old-k8s-version-304880] to gather additional debugging logs...
	I1026 15:12:18.986034  887383 cli_runner.go:164] Run: docker network inspect old-k8s-version-304880
	W1026 15:12:19.000545  887383 cli_runner.go:211] docker network inspect old-k8s-version-304880 returned with exit code 1
	I1026 15:12:19.000586  887383 network_create.go:287] error running [docker network inspect old-k8s-version-304880]: docker network inspect old-k8s-version-304880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-304880 not found
	I1026 15:12:19.000600  887383 network_create.go:289] output of [docker network inspect old-k8s-version-304880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-304880 not found
	
	** /stderr **
	I1026 15:12:19.000785  887383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:19.021429  887383 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:12:19.021822  887383 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:12:19.022179  887383 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:12:19.022625  887383 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a295d0}
	I1026 15:12:19.022649  887383 network_create.go:124] attempt to create docker network old-k8s-version-304880 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:12:19.022723  887383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-304880 old-k8s-version-304880
	I1026 15:12:19.085139  887383 network_create.go:108] docker network old-k8s-version-304880 192.168.76.0/24 created
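The three "skipping subnet" lines above are minikube walking the private ranges already claimed by other profiles before settling on 192.168.76.0/24. A rough by-hand equivalent, using only standard docker CLI templates:

    # List the subnet claimed by every docker network:
    docker network ls -q | xargs -n1 docker network inspect \
      -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    # Here 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are taken,
    # so 192.168.76.0/24 is the first free candidate.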
	I1026 15:12:19.085171  887383 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-304880" container
	I1026 15:12:19.085246  887383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:12:19.103382  887383 cli_runner.go:164] Run: docker volume create old-k8s-version-304880 --label name.minikube.sigs.k8s.io=old-k8s-version-304880 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:12:19.120989  887383 oci.go:103] Successfully created a docker volume old-k8s-version-304880
	I1026 15:12:19.121076  887383 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-304880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-304880 --entrypoint /usr/bin/test -v old-k8s-version-304880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:12:19.674869  887383 oci.go:107] Successfully prepared a docker volume old-k8s-version-304880
	I1026 15:12:19.674920  887383 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:12:19.674940  887383 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:12:19.675015  887383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-304880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 15:12:25.402950  887383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-304880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.72788942s)
	I1026 15:12:25.402984  887383 kic.go:203] duration metric: took 5.728039584s to extract preloaded images to volume ...
	W1026 15:12:25.403133  887383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 15:12:25.403243  887383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:12:25.461414  887383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-304880 --name old-k8s-version-304880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-304880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-304880 --network old-k8s-version-304880 --ip 192.168.76.2 --volume old-k8s-version-304880:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
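The container-create invocation above is a single very long line; the same command rewrapped for readability (flags copied verbatim from the log entry, nothing added):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname old-k8s-version-304880 --name old-k8s-version-304880 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=old-k8s-version-304880 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=old-k8s-version-304880 \
      --network old-k8s-version-304880 --ip 192.168.76.2 \
      --volume old-k8s-version-304880:/var \
      --security-opt apparmor=unconfined \
      --memory=3072mb --cpus=2 -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8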
	I1026 15:12:25.775169  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Running}}
	I1026 15:12:25.796787  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:12:25.820872  887383 cli_runner.go:164] Run: docker exec old-k8s-version-304880 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:12:25.876433  887383 oci.go:144] the created container "old-k8s-version-304880" has a running status.
	I1026 15:12:25.876472  887383 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa...
	I1026 15:12:26.690193  887383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:12:26.714658  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:12:26.731855  887383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:12:26.731887  887383 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-304880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:12:26.773011  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:12:26.789606  887383 machine.go:93] provisionDockerMachine start ...
	I1026 15:12:26.789720  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:26.807998  887383 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:26.808344  887383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1026 15:12:26.808359  887383 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:12:26.809132  887383 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:12:29.964529  887383 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:12:29.964550  887383 ubuntu.go:182] provisioning hostname "old-k8s-version-304880"
	I1026 15:12:29.964613  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:29.983355  887383 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.983654  887383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1026 15:12:29.983666  887383 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-304880 && echo "old-k8s-version-304880" | sudo tee /etc/hostname
	I1026 15:12:30.181534  887383 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:12:30.181630  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:30.201938  887383 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:30.202250  887383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1026 15:12:30.202282  887383 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-304880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-304880/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-304880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:12:30.353358  887383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:12:30.353402  887383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:12:30.353421  887383 ubuntu.go:190] setting up certificates
	I1026 15:12:30.353432  887383 provision.go:84] configureAuth start
	I1026 15:12:30.353516  887383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:12:30.378027  887383 provision.go:143] copyHostCerts
	I1026 15:12:30.378138  887383 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:12:30.378149  887383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:12:30.378267  887383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:12:30.378441  887383 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:12:30.378448  887383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:12:30.378500  887383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:12:30.378581  887383 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:12:30.378587  887383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:12:30.378635  887383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:12:30.378722  887383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-304880 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-304880]
	I1026 15:12:30.843643  887383 provision.go:177] copyRemoteCerts
	I1026 15:12:30.843712  887383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:12:30.843753  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:30.863437  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:12:30.969654  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:12:30.988447  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:12:31.010964  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:12:31.030140  887383 provision.go:87] duration metric: took 676.67949ms to configureAuth
	I1026 15:12:31.030210  887383 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:12:31.030424  887383 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:12:31.030567  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:31.048076  887383 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:31.048415  887383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1026 15:12:31.048431  887383 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:12:31.311195  887383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:12:31.311223  887383 machine.go:96] duration metric: took 4.521594798s to provisionDockerMachine
	I1026 15:12:31.311233  887383 client.go:171] duration metric: took 12.342388179s to LocalClient.Create
	I1026 15:12:31.311244  887383 start.go:167] duration metric: took 12.342465365s to libmachine.API.Create "old-k8s-version-304880"
	I1026 15:12:31.311270  887383 start.go:293] postStartSetup for "old-k8s-version-304880" (driver="docker")
	I1026 15:12:31.311290  887383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:12:31.311373  887383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:12:31.311433  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:31.328902  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:12:31.436804  887383 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:12:31.441410  887383 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:12:31.441480  887383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:12:31.441499  887383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:12:31.441557  887383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:12:31.441649  887383 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:12:31.441756  887383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:12:31.449540  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:12:31.469366  887383 start.go:296] duration metric: took 158.066986ms for postStartSetup
	I1026 15:12:31.469765  887383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:12:31.491229  887383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:12:31.491530  887383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:12:31.491578  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:31.513079  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:12:31.613879  887383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:12:31.618615  887383 start.go:128] duration metric: took 12.653751193s to createHost
	I1026 15:12:31.618638  887383 start.go:83] releasing machines lock for "old-k8s-version-304880", held for 12.653893865s
	I1026 15:12:31.618725  887383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:12:31.635501  887383 ssh_runner.go:195] Run: cat /version.json
	I1026 15:12:31.635555  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:31.635577  887383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:12:31.635637  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:12:31.655207  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:12:31.655724  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:12:31.756394  887383 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:31.851570  887383 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:12:31.887361  887383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:12:31.891647  887383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:12:31.891719  887383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:12:31.921243  887383 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 15:12:31.921269  887383 start.go:495] detecting cgroup driver to use...
	I1026 15:12:31.921324  887383 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:12:31.921404  887383 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:12:31.939081  887383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:12:31.952175  887383 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:12:31.952300  887383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:12:31.971203  887383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:12:31.990546  887383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:12:32.124789  887383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:12:32.251264  887383 docker.go:234] disabling docker service ...
	I1026 15:12:32.251337  887383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:12:32.274669  887383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:12:32.294723  887383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:12:32.425405  887383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:12:32.563766  887383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:12:32.579095  887383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:12:32.598564  887383 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 15:12:32.598637  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.609879  887383 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:12:32.609969  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.619192  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.628307  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.639542  887383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:12:32.648199  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.657306  887383 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.673091  887383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:32.683042  887383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:12:32.691083  887383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:12:32.698929  887383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:32.820811  887383 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:12:32.954535  887383 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:12:32.954623  887383 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:12:32.958669  887383 start.go:563] Will wait 60s for crictl version
	I1026 15:12:32.958748  887383 ssh_runner.go:195] Run: which crictl
	I1026 15:12:32.962538  887383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:12:32.990402  887383 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:12:32.990522  887383 ssh_runner.go:195] Run: crio --version
	I1026 15:12:33.030849  887383 ssh_runner.go:195] Run: crio --version
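The version probe above shells out to "crictl version" on the node. A minimal standalone sketch of the same check, assuming crictl is on PATH and /etc/crictl.yaml already points at the CRI-O socket as configured earlier in this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner call in the log: ask the CRI for its version.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.34.1
}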
	I1026 15:12:33.068685  887383 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 15:12:33.071496  887383 cli_runner.go:164] Run: docker network inspect old-k8s-version-304880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:33.088855  887383 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:12:33.092868  887383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:33.103458  887383 kubeadm.go:883] updating cluster {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:12:33.103609  887383 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:12:33.103691  887383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:33.146676  887383 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:33.146705  887383 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:12:33.146766  887383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:33.177216  887383 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:33.177238  887383 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:12:33.177246  887383 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1026 15:12:33.177343  887383 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-304880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:12:33.177427  887383 ssh_runner.go:195] Run: crio config
	I1026 15:12:33.253750  887383 cni.go:84] Creating CNI manager for ""
	I1026 15:12:33.253780  887383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:33.253794  887383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:12:33.253928  887383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-304880 NodeName:old-k8s-version-304880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:12:33.254123  887383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-304880"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:12:33.254209  887383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
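The kubeadm config dump above is rendered from the options struct logged at kubeadm.go:190. A hedged sketch of that kind of templating with Go's text/template, using a few of the values seen in this run; this is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Render a ClusterConfiguration fragment from the option values above.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		KubernetesVersion: "v1.28.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal",
	})
}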
	I1026 15:12:33.262663  887383 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:12:33.262742  887383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:12:33.270801  887383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 15:12:33.286099  887383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:12:33.301950  887383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1026 15:12:33.316274  887383 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:12:33.320314  887383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:33.330980  887383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:33.461608  887383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:12:33.479704  887383 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880 for IP: 192.168.76.2
	I1026 15:12:33.479738  887383 certs.go:195] generating shared ca certs ...
	I1026 15:12:33.479754  887383 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.479910  887383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:12:33.479965  887383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:12:33.479977  887383 certs.go:257] generating profile certs ...
	I1026 15:12:33.480034  887383 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.key
	I1026 15:12:33.480061  887383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt with IP's: []
	I1026 15:12:34.198612  887383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt ...
	I1026 15:12:34.198645  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: {Name:mk19a300b56847c0e0a9def0972b30bdc3a7a88e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.198857  887383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.key ...
	I1026 15:12:34.198876  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.key: {Name:mk68ba93ea6333e4b27a3c103e3e198a0cf25f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.198967  887383 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e
	I1026 15:12:34.198987  887383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt.2229c60e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:12:34.659577  887383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt.2229c60e ...
	I1026 15:12:34.659607  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt.2229c60e: {Name:mkd5611e4d50a3e40f39d2934ed4882c1a6ddfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.659792  887383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e ...
	I1026 15:12:34.659808  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e: {Name:mk2ea928648772aa02b987210da012db9cdc06cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.659898  887383 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt.2229c60e -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt
	I1026 15:12:34.659979  887383 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key
	I1026 15:12:34.660044  887383 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key
	I1026 15:12:34.660062  887383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt with IP's: []
	I1026 15:12:34.772382  887383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt ...
	I1026 15:12:34.772413  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt: {Name:mk78dc541f2ac39d5ec71dd364b672ac5e29a4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.772604  887383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key ...
	I1026 15:12:34.772621  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key: {Name:mkab935c1d46692d19f122f4a3a39354d570932d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:34.772824  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:12:34.772903  887383 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:12:34.772920  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:12:34.772945  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:12:34.772970  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:12:34.773000  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:12:34.773050  887383 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:12:34.773618  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:12:34.793791  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:12:34.812808  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:12:34.830754  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:12:34.849711  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:12:34.870071  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:12:34.889511  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:12:34.907993  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:12:34.926647  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:12:34.945881  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:12:34.965998  887383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:12:34.985204  887383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:12:35.004529  887383 ssh_runner.go:195] Run: openssl version
	I1026 15:12:35.013364  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:12:35.024531  887383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:12:35.029816  887383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:12:35.029972  887383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:12:35.074859  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:12:35.084094  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:12:35.093900  887383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:12:35.098427  887383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:12:35.098525  887383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:12:35.140309  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:12:35.150185  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:12:35.159108  887383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:35.163515  887383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:35.163621  887383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:35.210388  887383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
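The "openssl x509 -hash" / "ln -fs" pairs above wire each PEM into the system trust store under its OpenSSL subject hash. A minimal sketch of the same wiring in Go, with example paths taken from this log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	// Ask openssl for the subject hash, as the log's command does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // "ln -fs" semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}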
	I1026 15:12:35.219083  887383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:12:35.222975  887383 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:12:35.223052  887383 kubeadm.go:400] StartCluster: {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:35.223131  887383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:12:35.223188  887383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:12:35.254342  887383 cri.go:89] found id: ""
	I1026 15:12:35.254493  887383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:12:35.262842  887383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:12:35.271424  887383 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:12:35.271558  887383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:12:35.279987  887383 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:12:35.280051  887383 kubeadm.go:157] found existing configuration files:
	
	I1026 15:12:35.280119  887383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:12:35.290029  887383 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:12:35.290145  887383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:12:35.298715  887383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:12:35.306635  887383 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:12:35.306723  887383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:12:35.314593  887383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:12:35.322289  887383 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:12:35.322376  887383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:12:35.329803  887383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:12:35.337517  887383 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:12:35.337588  887383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:12:35.345120  887383 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:12:35.399152  887383 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1026 15:12:35.399431  887383 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:12:35.440670  887383 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:12:35.440792  887383 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 15:12:35.440851  887383 kubeadm.go:318] OS: Linux
	I1026 15:12:35.440915  887383 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:12:35.440985  887383 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 15:12:35.441049  887383 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:12:35.441116  887383 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:12:35.441182  887383 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:12:35.441250  887383 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:12:35.441314  887383 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:12:35.441379  887383 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:12:35.441444  887383 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 15:12:35.545561  887383 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:12:35.545732  887383 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:12:35.545866  887383 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 15:12:35.709186  887383 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:12:35.714264  887383 out.go:252]   - Generating certificates and keys ...
	I1026 15:12:35.714410  887383 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:12:35.714515  887383 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:12:35.963849  887383 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:12:36.520903  887383 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:12:36.912232  887383 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:12:37.377044  887383 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:12:37.786693  887383 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:12:37.787060  887383 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-304880] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:38.231348  887383 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:12:38.231719  887383 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-304880] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:38.854174  887383 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:12:39.555035  887383 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:12:40.397213  887383 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:12:40.397491  887383 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:12:40.893425  887383 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:12:41.622449  887383 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:12:41.997157  887383 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:12:43.195742  887383 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:12:43.196559  887383 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:12:43.199360  887383 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:12:43.202893  887383 out.go:252]   - Booting up control plane ...
	I1026 15:12:43.203008  887383 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:12:43.203086  887383 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:12:43.203153  887383 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:12:43.219668  887383 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:12:43.220659  887383 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:12:43.221071  887383 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:12:43.360935  887383 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 15:12:51.365562  887383 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.004869 seconds
	I1026 15:12:51.365836  887383 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:12:51.388339  887383 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:12:51.917178  887383 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:12:51.917675  887383 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-304880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:12:52.431213  887383 kubeadm.go:318] [bootstrap-token] Using token: ctma1k.ls12di9pv6b0i306
	I1026 15:12:52.434128  887383 out.go:252]   - Configuring RBAC rules ...
	I1026 15:12:52.434258  887383 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:12:52.439511  887383 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:12:52.448804  887383 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:12:52.456491  887383 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:12:52.461284  887383 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:12:52.467025  887383 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:12:52.485902  887383 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:12:52.800264  887383 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:12:52.853697  887383 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:12:52.855302  887383 kubeadm.go:318] 
	I1026 15:12:52.855378  887383 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:12:52.855388  887383 kubeadm.go:318] 
	I1026 15:12:52.855465  887383 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:12:52.855474  887383 kubeadm.go:318] 
	I1026 15:12:52.855500  887383 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:12:52.855562  887383 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:12:52.855616  887383 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:12:52.855625  887383 kubeadm.go:318] 
	I1026 15:12:52.855679  887383 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:12:52.855687  887383 kubeadm.go:318] 
	I1026 15:12:52.855735  887383 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:12:52.855743  887383 kubeadm.go:318] 
	I1026 15:12:52.855795  887383 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:12:52.855873  887383 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:12:52.855944  887383 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:12:52.855952  887383 kubeadm.go:318] 
	I1026 15:12:52.856035  887383 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:12:52.856116  887383 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:12:52.856125  887383 kubeadm.go:318] 
	I1026 15:12:52.856209  887383 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ctma1k.ls12di9pv6b0i306 \
	I1026 15:12:52.856315  887383 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:12:52.856339  887383 kubeadm.go:318] 	--control-plane 
	I1026 15:12:52.856347  887383 kubeadm.go:318] 
	I1026 15:12:52.856431  887383 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:12:52.856439  887383 kubeadm.go:318] 
	I1026 15:12:52.856535  887383 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ctma1k.ls12di9pv6b0i306 \
	I1026 15:12:52.856640  887383 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:12:52.862671  887383 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:12:52.862800  887383 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:12:52.862841  887383 cni.go:84] Creating CNI manager for ""
	I1026 15:12:52.862857  887383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
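A hypothetical sketch of the recommendation logged above; chooseCNI and its cases are illustrative only, not minikube's actual cni.New logic:

package main

import "fmt"

// chooseCNI is a hypothetical stand-in for the decision in the log:
// a kic driver paired with a non-docker runtime needs a real CNI,
// and kindnet is the lightweight default.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // otherwise leave CNI selection to defaults
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet, as in this run
}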
	I1026 15:12:52.868584  887383 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:12:52.871402  887383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:12:52.881385  887383 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1026 15:12:52.881409  887383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:12:52.938521  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:12:54.045221  887383 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.106657388s)
	I1026 15:12:54.045258  887383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:12:54.045383  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:54.045449  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-304880 minikube.k8s.io/updated_at=2025_10_26T15_12_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=old-k8s-version-304880 minikube.k8s.io/primary=true
	I1026 15:12:54.076909  887383 ops.go:34] apiserver oom_adj: -16
	I1026 15:12:54.276525  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:54.776790  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:55.277444  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:55.777561  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:56.277595  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:56.777603  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:57.276764  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:57.777454  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:58.276645  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:58.777033  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:59.277617  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:59.776683  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:00.276676  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:00.777304  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:01.276810  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:01.777210  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:02.276807  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:02.777218  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:03.276720  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:03.776901  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:04.277558  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:04.777429  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:05.276848  887383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:05.469939  887383 kubeadm.go:1113] duration metric: took 11.424587128s to wait for elevateKubeSystemPrivileges
	I1026 15:13:05.469970  887383 kubeadm.go:402] duration metric: took 30.246946393s to StartCluster
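The burst of "kubectl get sa default" runs above is a fixed-cadence poll for the default ServiceAccount, repeated until RBAC elevation lands. A hedged sketch of that wait loop, reusing the binary and kubeconfig paths from this log; the 2-minute deadline is an assumption:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		// Re-run the same probe the log shows until it exits 0.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	log.Fatal("timed out waiting for default service account")
}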
	I1026 15:13:05.469989  887383 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:05.470052  887383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:05.471027  887383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:05.471267  887383 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:05.471401  887383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:05.471652  887383 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:05.471695  887383 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:05.471757  887383 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-304880"
	I1026 15:13:05.471772  887383 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-304880"
	I1026 15:13:05.471796  887383 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:05.472321  887383 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-304880"
	I1026 15:13:05.472343  887383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-304880"
	I1026 15:13:05.472432  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:05.472649  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:05.474836  887383 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:05.483651  887383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:05.514164  887383 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-304880"
	I1026 15:13:05.514206  887383 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:05.514627  887383 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:05.518812  887383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:05.521696  887383 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:05.521719  887383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:05.521868  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:05.569474  887383 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:05.569495  887383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:05.569558  887383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:05.576230  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:05.606325  887383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:05.850572  887383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:05.900680  887383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:05.934399  887383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:13:05.934515  887383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:06.833767  887383 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:13:06.834154  887383 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 15:13:06.896427  887383 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:13:06.899407  887383 addons.go:514] duration metric: took 1.427688296s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:07.338487  887383 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-304880" context rescaled to 1 replicas
	W1026 15:13:08.836971  887383 node_ready.go:57] node "old-k8s-version-304880" has "Ready":"False" status (will retry)
	W1026 15:13:10.837385  887383 node_ready.go:57] node "old-k8s-version-304880" has "Ready":"False" status (will retry)
	W1026 15:13:12.841956  887383 node_ready.go:57] node "old-k8s-version-304880" has "Ready":"False" status (will retry)
	W1026 15:13:15.337227  887383 node_ready.go:57] node "old-k8s-version-304880" has "Ready":"False" status (will retry)
	W1026 15:13:17.837462  887383 node_ready.go:57] node "old-k8s-version-304880" has "Ready":"False" status (will retry)
	I1026 15:13:19.841847  887383 node_ready.go:49] node "old-k8s-version-304880" is "Ready"
	I1026 15:13:19.841876  887383 node_ready.go:38] duration metric: took 13.008026697s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:13:19.841897  887383 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:19.841955  887383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:19.855419  887383 api_server.go:72] duration metric: took 14.384114988s to wait for apiserver process to appear ...
	I1026 15:13:19.855445  887383 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:19.855469  887383 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:19.864236  887383 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:13:19.865688  887383 api_server.go:141] control plane version: v1.28.0
	I1026 15:13:19.865717  887383 api_server.go:131] duration metric: took 10.263754ms to wait for apiserver health ...
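The healthz wait above simply polls the apiserver endpoint until it answers 200. A minimal sketch, assuming certificate verification is skipped rather than wired to the cluster CA (which is what the real check loads):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log
	for {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver is healthy") // the log's "returned 200: ok"
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}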
	I1026 15:13:19.865726  887383 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:19.873226  887383 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:19.873273  887383 system_pods.go:61] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:19.873282  887383 system_pods.go:61] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:19.873297  887383 system_pods.go:61] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:19.873310  887383 system_pods.go:61] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:19.873315  887383 system_pods.go:61] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:19.873321  887383 system_pods.go:61] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:19.873335  887383 system_pods.go:61] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:19.873346  887383 system_pods.go:61] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:19.873356  887383 system_pods.go:74] duration metric: took 7.622616ms to wait for pod list to return data ...
	I1026 15:13:19.873364  887383 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:19.879128  887383 default_sa.go:45] found service account: "default"
	I1026 15:13:19.879151  887383 default_sa.go:55] duration metric: took 5.782054ms for default service account to be created ...
	I1026 15:13:19.879160  887383 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:13:19.886190  887383 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:19.886224  887383 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:19.886232  887383 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:19.886239  887383 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:19.886244  887383 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:19.886249  887383 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:19.886253  887383 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:19.886257  887383 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:19.886264  887383 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:19.886294  887383 retry.go:31] will retry after 282.838237ms: missing components: kube-dns
	I1026 15:13:20.174288  887383 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:20.174371  887383 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:20.174393  887383 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:20.174424  887383 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:20.174458  887383 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:20.174478  887383 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:20.174508  887383 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:20.174526  887383 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:20.174543  887383 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:13:20.174572  887383 retry.go:31] will retry after 318.964295ms: missing components: kube-dns
	I1026 15:13:20.498723  887383 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:20.498768  887383 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:20.498775  887383 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:20.498801  887383 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:20.498811  887383 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:20.498817  887383 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:20.498835  887383 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:20.498846  887383 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:20.498851  887383 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:13:20.498874  887383 retry.go:31] will retry after 475.078017ms: missing components: kube-dns
	I1026 15:13:20.978430  887383 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:20.978480  887383 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:20.978510  887383 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:20.978526  887383 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:20.978532  887383 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:20.978537  887383 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:20.978542  887383 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:20.978550  887383 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:20.978554  887383 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:13:20.978591  887383 retry.go:31] will retry after 570.836633ms: missing components: kube-dns
	I1026 15:13:21.553693  887383 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:21.553724  887383 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Running
	I1026 15:13:21.553731  887383 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running
	I1026 15:13:21.553735  887383 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:13:21.553740  887383 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running
	I1026 15:13:21.553763  887383 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running
	I1026 15:13:21.553774  887383 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:13:21.553778  887383 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running
	I1026 15:13:21.553793  887383 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:13:21.553801  887383 system_pods.go:126] duration metric: took 1.674634129s to wait for k8s-apps to be running ...
	I1026 15:13:21.553809  887383 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:21.553880  887383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:21.567418  887383 system_svc.go:56] duration metric: took 13.599044ms WaitForService to wait for kubelet
	I1026 15:13:21.567447  887383 kubeadm.go:586] duration metric: took 16.096149668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:21.567466  887383 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:21.570287  887383 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:13:21.570316  887383 node_conditions.go:123] node cpu capacity is 2
	I1026 15:13:21.570330  887383 node_conditions.go:105] duration metric: took 2.836504ms to run NodePressure ...
	I1026 15:13:21.570343  887383 start.go:241] waiting for startup goroutines ...
	I1026 15:13:21.570350  887383 start.go:246] waiting for cluster config update ...
	I1026 15:13:21.570361  887383 start.go:255] writing updated cluster config ...
	I1026 15:13:21.570655  887383 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:21.574288  887383 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:21.578647  887383 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.584000  887383 pod_ready.go:94] pod "coredns-5dd5756b68-fdtlk" is "Ready"
	I1026 15:13:21.584025  887383 pod_ready.go:86] duration metric: took 5.348573ms for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.587071  887383 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.591663  887383 pod_ready.go:94] pod "etcd-old-k8s-version-304880" is "Ready"
	I1026 15:13:21.591691  887383 pod_ready.go:86] duration metric: took 4.598124ms for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.594806  887383 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.599847  887383 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-304880" is "Ready"
	I1026 15:13:21.599921  887383 pod_ready.go:86] duration metric: took 5.090782ms for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.603420  887383 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:21.979334  887383 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-304880" is "Ready"
	I1026 15:13:21.979367  887383 pod_ready.go:86] duration metric: took 375.919466ms for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:22.179890  887383 pod_ready.go:83] waiting for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:22.577952  887383 pod_ready.go:94] pod "kube-proxy-rsdnc" is "Ready"
	I1026 15:13:22.577983  887383 pod_ready.go:86] duration metric: took 398.066939ms for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:22.786835  887383 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:23.179585  887383 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-304880" is "Ready"
	I1026 15:13:23.179613  887383 pod_ready.go:86] duration metric: took 392.751244ms for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:23.179625  887383 pod_ready.go:40] duration metric: took 1.605306457s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:23.252714  887383 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 15:13:23.256096  887383 out.go:203] 
	W1026 15:13:23.259041  887383 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:13:23.262042  887383 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:13:23.265856  887383 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-304880" cluster and "default" namespace by default
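The start log above is a series of wait loops: pgrep for the apiserver process, polling https://192.168.76.2:8443/healthz until it answers 200 with body "ok", and re-listing kube-system pods with growing retry intervals until kube-dns reports Ready. A minimal standalone sketch of the healthz gate, with the endpoint copied from the log and TLS verification skipped for brevity (a real caller would load the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz mirrors the gate in api_server.go: GET /healthz until the
	// apiserver answers 200 with body "ok", sleeping a growing interval
	// between attempts, as the retry.go lines above show.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption made to keep the sketch short: the test cluster's
			// self-signed CA is not verified here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(backoff)
			backoff *= 2 // roughly the growing intervals seen in retry.go
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}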
	
	
	==> CRI-O <==
	Oct 26 15:13:20 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:20.501881173Z" level=info msg="Created container 96a55c2455452a513d38a60c194e390a8481618b9daac96adb89102452133bbf: kube-system/coredns-5dd5756b68-fdtlk/coredns" id=4b35383f-3e3c-4c12-b418-adaff1c0ef18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:20 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:20.5026955Z" level=info msg="Starting container: 96a55c2455452a513d38a60c194e390a8481618b9daac96adb89102452133bbf" id=cfba737e-0392-46da-af7b-6b489d057cad name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:20 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:20.506523473Z" level=info msg="Started container" PID=1936 containerID=96a55c2455452a513d38a60c194e390a8481618b9daac96adb89102452133bbf description=kube-system/coredns-5dd5756b68-fdtlk/coredns id=cfba737e-0392-46da-af7b-6b489d057cad name=/runtime.v1.RuntimeService/StartContainer sandboxID=d24b654d8723334b04c91ced7aaa2817c3eda91af02f24bc89392b2d4abda2b6
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.799723251Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e758e8a8-cf0b-467e-860a-4421c20a475e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.799797713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.805141264Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7703080f501b1edc7cac722966a3d5537079cc36806fe9b8513d59e9596a65ea UID:e84a2428-1939-453d-bca6-7b2884f6ea51 NetNS:/var/run/netns/66227211-40f2-4d80-a0ae-3f3a2e241d34 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079728}] Aliases:map[]}"
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.805196707Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.818424326Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7703080f501b1edc7cac722966a3d5537079cc36806fe9b8513d59e9596a65ea UID:e84a2428-1939-453d-bca6-7b2884f6ea51 NetNS:/var/run/netns/66227211-40f2-4d80-a0ae-3f3a2e241d34 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079728}] Aliases:map[]}"
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.818576221Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.821171384Z" level=info msg="Ran pod sandbox 7703080f501b1edc7cac722966a3d5537079cc36806fe9b8513d59e9596a65ea with infra container: default/busybox/POD" id=e758e8a8-cf0b-467e-860a-4421c20a475e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.826049716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16adbf96-f0f7-493f-bb25-eac7f955e938 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.826326412Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=16adbf96-f0f7-493f-bb25-eac7f955e938 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.826379385Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=16adbf96-f0f7-493f-bb25-eac7f955e938 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.828296535Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f1fef84-3ad8-4264-a1cf-e69b41ac0193 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:23 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:23.83203549Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.048387245Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6f1fef84-3ad8-4264-a1cf-e69b41ac0193 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.04994188Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac30ecc3-8417-4666-9929-3295c5cca3e8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.052001201Z" level=info msg="Creating container: default/busybox/busybox" id=8741fb74-2f44-42d9-8efb-c213e449036c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.052182413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.059007721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.059778765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.081404017Z" level=info msg="Created container e65314630798b83059d922121a250835e22c99a23c535a7c21b5f887dda07a70: default/busybox/busybox" id=8741fb74-2f44-42d9-8efb-c213e449036c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.082544058Z" level=info msg="Starting container: e65314630798b83059d922121a250835e22c99a23c535a7c21b5f887dda07a70" id=bd6babe4-e313-484d-836c-0687f7826365 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:26 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:26.084213189Z" level=info msg="Started container" PID=1990 containerID=e65314630798b83059d922121a250835e22c99a23c535a7c21b5f887dda07a70 description=default/busybox/busybox id=bd6babe4-e313-484d-836c-0687f7826365 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7703080f501b1edc7cac722966a3d5537079cc36806fe9b8513d59e9596a65ea
	Oct 26 15:13:32 old-k8s-version-304880 crio[838]: time="2025-10-26T15:13:32.667553952Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
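The CRI-O stanza traces a full pod start: RunPodSandbox wires the busybox pod into the kindnet CNI network, ImageStatus misses for gcr.io/k8s-minikube/busybox:1.28.4-glibc, PullImage fetches it by digest, then CreateContainer and StartContainer run it. The check-then-pull half of that flow can be reproduced against the same socket with the CRI Go bindings; a rough sketch, assuming k8s.io/cri-api and root access to /var/run/crio/crio.sock (crictl issues the same RPCs from the shell):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Same socket the node annotation advertises: unix:///var/run/crio/crio.sock.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		images := runtimeapi.NewImageServiceClient(conn)
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// "Checking image status": a nil Image in the response is the "not found" case.
		status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			log.Fatal(err)
		}
		if status.GetImage() == nil {
			// "Pulling image" / "Pulled image": the response carries the resolved digest.
			pulled, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec})
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("pulled:", pulled.GetImageRef())
			return
		}
		fmt.Println("already present:", status.GetImage().GetId())
	}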
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e65314630798b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   7703080f501b1       busybox                                          default
	96a55c2455452       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   d24b654d87233       coredns-5dd5756b68-fdtlk                         kube-system
	7d56432edfd44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   351f0935d8b6b       storage-provisioner                              kube-system
	ea11a1d9c7331       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   562890656be3f       kindnet-kwb2h                                    kube-system
	9126e72ae5a0f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   be49109e4e906       kube-proxy-rsdnc                                 kube-system
	b74dcdeff3f8e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   6c34603d22a03       kube-controller-manager-old-k8s-version-304880   kube-system
	2e68297616032       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   de063d983cfec       etcd-old-k8s-version-304880                      kube-system
	bf672498be283       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   d028533b634a1       kube-apiserver-old-k8s-version-304880            kube-system
	09b9c34258ea2       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   6f5da044a07d6       kube-scheduler-old-k8s-version-304880            kube-system
	
	
	==> coredns [96a55c2455452a513d38a60c194e390a8481618b9daac96adb89102452133bbf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44459 - 11730 "HINFO IN 6162708355143573043.1730222899057882364. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017698857s
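CoreDNS 1.10.1 is serving on :53 and the only logged query is its own HINFO self-test. The kube-dns readiness that the earlier retries were waiting on can be probed directly by pointing a resolver at the service IP allocated above (10.96.0.10); a small sketch, runnable from inside the cluster network:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Force lookups through the kube-dns ClusterIP instead of /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("kube-dns not answering:", err)
			return
		}
		fmt.Println("resolved:", addrs) // expect the 10.96.0.1 apiserver ClusterIP
	}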
	
	
	==> describe nodes <==
	Name:               old-k8s-version-304880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-304880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-304880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-304880
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:13:23 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:13:23 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:13:23 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:13:23 +0000   Sun, 26 Oct 2025 15:13:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-304880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d0d7db31-34b9-4b69-bff7-8420a1723dd8
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-fdtlk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-304880                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-kwb2h                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-304880             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-304880    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-rsdnc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-304880             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-304880 event: Registered Node old-k8s-version-304880 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-304880 status is now: NodeReady
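The describe output is the same data minikube's NodePressure verification read programmatically: capacity (2 CPUs, 203034800Ki ephemeral storage), all pressure conditions False, and Ready True. A client-go sketch that pulls those fields for this node, assuming a standard kubeconfig at ~/.kube/config:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"old-k8s-version-304880", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// The same fields node_conditions.go reports: cpu and ephemeral storage.
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())

		// MemoryPressure, DiskPressure, and PIDPressure should all be False
		// on a healthy node; Ready should be True.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %s\n", c.Type, c.Status)
		}
	}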
	
	
	==> dmesg <==
	[Oct26 14:46] overlayfs: idmapped layers are currently not supported
	[Oct26 14:47] overlayfs: idmapped layers are currently not supported
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2e68297616032b5808c50c8003ece8aa86231d968d9ee9c2ac32f54d9fa29324] <==
	{"level":"info","ts":"2025-10-26T15:12:45.498632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-26T15:12:45.49923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T15:12:45.500897Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T15:12:45.501117Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:12:45.50115Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:12:45.501322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:12:45.502633Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:12:46.444186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-26T15:12:46.44432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-26T15:12:46.444367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-26T15:12:46.44441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:12:46.444445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T15:12:46.444486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-26T15:12:46.44453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T15:12:46.44659Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:12:46.448935Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-304880 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:12:46.449132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:12:46.44976Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:12:46.449877Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:12:46.449928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:12:46.44998Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:12:46.452122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T15:12:46.452303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:12:46.45234Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T15:12:46.461358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 15:13:34 up  4:56,  0 user,  load average: 2.72, 3.72, 2.96
	Linux old-k8s-version-304880 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea11a1d9c733114cd29f9a7c731a6b447d904e0a0cee3160256341d2af44133d] <==
	I1026 15:13:09.134231       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:09.134640       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:13:09.134799       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:09.134839       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:09.134872       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:09.425623       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:09.425768       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:09.425833       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:09.428443       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:13:09.625969       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:13:09.626066       1 metrics.go:72] Registering metrics
	I1026 15:13:09.626136       1 controller.go:711] "Syncing nftables rules"
	I1026 15:13:19.425909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:13:19.425987       1 main.go:301] handling current node
	I1026 15:13:29.425389       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:13:29.425424       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bf672498be283df32e951f51e16983ef21c4f3a4a36c29bec04010a60ac7c7a6] <==
	I1026 15:12:49.514761       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:12:49.518313       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:12:49.522803       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:12:49.533564       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 15:12:49.533680       1 aggregator.go:166] initial CRD sync complete...
	I1026 15:12:49.533717       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 15:12:49.533750       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:12:49.533781       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:12:49.540603       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:12:49.561090       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:12:50.127730       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:12:50.133278       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:12:50.133368       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:12:50.777297       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:12:50.825898       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:12:50.938870       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:12:50.948190       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 15:12:50.949443       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:12:50.955017       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:12:51.170201       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:12:52.775827       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:12:52.795779       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:12:52.814917       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 15:13:05.365045       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1026 15:13:05.449914       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b74dcdeff3f8e4c4a1981b6558efa1efe6582883a06f4d11d31c77de5595291f] <==
	I1026 15:13:05.407763       1 shared_informer.go:318] Caches are synced for HPA
	I1026 15:13:05.419090       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1026 15:13:05.420300       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:13:05.494936       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:13:05.667188       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kwb2h"
	I1026 15:13:05.667230       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-79p5g"
	I1026 15:13:05.722965       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rsdnc"
	I1026 15:13:05.805961       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fdtlk"
	I1026 15:13:05.809133       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:13:05.809157       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:13:05.848237       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:13:05.861202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="459.90745ms"
	I1026 15:13:05.949243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.990223ms"
	I1026 15:13:05.949359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.777µs"
	I1026 15:13:06.918764       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 15:13:06.952006       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-79p5g"
	I1026 15:13:06.964556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.494556ms"
	I1026 15:13:06.977166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.472346ms"
	I1026 15:13:06.977436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.538µs"
	I1026 15:13:19.558374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.509µs"
	I1026 15:13:19.573059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.053µs"
	I1026 15:13:20.275028       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1026 15:13:21.152458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.233µs"
	I1026 15:13:21.189099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.598318ms"
	I1026 15:13:21.189921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.373µs"
	
	
	==> kube-proxy [9126e72ae5a0f4f1a151911abb117701e3f90f3bad6af909570c474f551f0c1f] <==
	I1026 15:13:06.377486       1 server_others.go:69] "Using iptables proxy"
	I1026 15:13:06.407196       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 15:13:06.460422       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:06.469260       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:13:06.469305       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:13:06.469313       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:13:06.469350       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:13:06.469592       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:13:06.469609       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:06.473132       1 config.go:188] "Starting service config controller"
	I1026 15:13:06.473151       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:13:06.473170       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:13:06.473173       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:13:06.473546       1 config.go:315] "Starting node config controller"
	I1026 15:13:06.473553       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:13:06.573985       1 shared_informer.go:318] Caches are synced for service config
	I1026 15:13:06.573985       1 shared_informer.go:318] Caches are synced for node config
	I1026 15:13:06.574002       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [09b9c34258ea2cbce201806ec0eff353ee5e73c80e85900ce6c17855a5ea75e1] <==
	W1026 15:12:49.988527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 15:12:49.988571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 15:12:49.988679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 15:12:49.988737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 15:12:49.988815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 15:12:49.988852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 15:12:49.988943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 15:12:49.988984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 15:12:49.989112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 15:12:49.989545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 15:12:49.989200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 15:12:49.989650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 15:12:49.989245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 15:12:49.989748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 15:12:49.989315       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 15:12:49.989833       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1026 15:12:49.989403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 15:12:49.989915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 15:12:49.989495       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 15:12:49.989998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1026 15:12:49.989529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 15:12:49.990112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 15:12:49.990621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 15:12:49.990675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1026 15:12:50.978333       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
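Every warning above is the same startup race: the scheduler begins its informer LISTs before the apiserver has finished reconciling the system:kube-scheduler RBAC bindings, so the first round comes back forbidden; the final "Caches are synced" line confirms the retries succeeded about a second later. The question the apiserver was answering can be posed directly with a SelfSubjectAccessReview, the machinery behind kubectl auth can-i; a sketch using the first forbidden verb/resource from the log:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Same verb and resource as the first forbidden message above.
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csistoragecapacities",
				},
			},
		}
		resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.Background(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}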
	
	
	==> kubelet <==
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.021224    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5feeb2b9-1888-4036-9214-e75dc8a9bef9-lib-modules\") pod \"kube-proxy-rsdnc\" (UID: \"5feeb2b9-1888-4036-9214-e75dc8a9bef9\") " pod="kube-system/kube-proxy-rsdnc"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.021252    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z24x6\" (UniqueName: \"kubernetes.io/projected/5feeb2b9-1888-4036-9214-e75dc8a9bef9-kube-api-access-z24x6\") pod \"kube-proxy-rsdnc\" (UID: \"5feeb2b9-1888-4036-9214-e75dc8a9bef9\") " pod="kube-system/kube-proxy-rsdnc"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.124243    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0973c672-640b-42ab-842c-61cacaf8d96e-xtables-lock\") pod \"kindnet-kwb2h\" (UID: \"0973c672-640b-42ab-842c-61cacaf8d96e\") " pod="kube-system/kindnet-kwb2h"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.124298    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7wd\" (UniqueName: \"kubernetes.io/projected/0973c672-640b-42ab-842c-61cacaf8d96e-kube-api-access-tb7wd\") pod \"kindnet-kwb2h\" (UID: \"0973c672-640b-42ab-842c-61cacaf8d96e\") " pod="kube-system/kindnet-kwb2h"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.124336    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0973c672-640b-42ab-842c-61cacaf8d96e-cni-cfg\") pod \"kindnet-kwb2h\" (UID: \"0973c672-640b-42ab-842c-61cacaf8d96e\") " pod="kube-system/kindnet-kwb2h"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: I1026 15:13:06.124360    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0973c672-640b-42ab-842c-61cacaf8d96e-lib-modules\") pod \"kindnet-kwb2h\" (UID: \"0973c672-640b-42ab-842c-61cacaf8d96e\") " pod="kube-system/kindnet-kwb2h"
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: W1026 15:13:06.215034    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-be49109e4e9062af618d2c77cc614acf3057bd54bc43490d8abe73c60a742c96 WatchSource:0}: Error finding container be49109e4e9062af618d2c77cc614acf3057bd54bc43490d8abe73c60a742c96: Status 404 returned error can't find the container with id be49109e4e9062af618d2c77cc614acf3057bd54bc43490d8abe73c60a742c96
	Oct 26 15:13:06 old-k8s-version-304880 kubelet[1367]: W1026 15:13:06.536932    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-562890656be3fbfea0ebca004462a617b4132d726354ec37ad337f8ae62d9651 WatchSource:0}: Error finding container 562890656be3fbfea0ebca004462a617b4132d726354ec37ad337f8ae62d9651: Status 404 returned error can't find the container with id 562890656be3fbfea0ebca004462a617b4132d726354ec37ad337f8ae62d9651
	Oct 26 15:13:09 old-k8s-version-304880 kubelet[1367]: I1026 15:13:09.123494    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rsdnc" podStartSLOduration=4.123441593 podCreationTimestamp="2025-10-26 15:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:07.109507505 +0000 UTC m=+14.388385121" watchObservedRunningTime="2025-10-26 15:13:09.123441593 +0000 UTC m=+16.402319201"
	Oct 26 15:13:12 old-k8s-version-304880 kubelet[1367]: I1026 15:13:12.951254    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-kwb2h" podStartSLOduration=5.5044958 podCreationTimestamp="2025-10-26 15:13:05 +0000 UTC" firstStartedPulling="2025-10-26 15:13:06.541089487 +0000 UTC m=+13.819967094" lastFinishedPulling="2025-10-26 15:13:08.987808242 +0000 UTC m=+16.266685850" observedRunningTime="2025-10-26 15:13:09.124667313 +0000 UTC m=+16.403544929" watchObservedRunningTime="2025-10-26 15:13:12.951214556 +0000 UTC m=+20.230092189"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.514494    1367 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.548172    1367 topology_manager.go:215] "Topology Admit Handler" podUID="d765ae9d-1a98-44a0-adef-fdca5334d7de" podNamespace="kube-system" podName="coredns-5dd5756b68-fdtlk"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.560142    1367 topology_manager.go:215] "Topology Admit Handler" podUID="01c26bc9-c6c9-4eed-a838-d364398a7062" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: W1026 15:13:19.560854    1367 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-304880" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-304880' and this object
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: E1026 15:13:19.560915    1367 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-304880" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-304880' and this object
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.722984    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d765ae9d-1a98-44a0-adef-fdca5334d7de-config-volume\") pod \"coredns-5dd5756b68-fdtlk\" (UID: \"d765ae9d-1a98-44a0-adef-fdca5334d7de\") " pod="kube-system/coredns-5dd5756b68-fdtlk"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.723036    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hvv\" (UniqueName: \"kubernetes.io/projected/01c26bc9-c6c9-4eed-a838-d364398a7062-kube-api-access-78hvv\") pod \"storage-provisioner\" (UID: \"01c26bc9-c6c9-4eed-a838-d364398a7062\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.723061    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnwn\" (UniqueName: \"kubernetes.io/projected/d765ae9d-1a98-44a0-adef-fdca5334d7de-kube-api-access-2dnwn\") pod \"coredns-5dd5756b68-fdtlk\" (UID: \"d765ae9d-1a98-44a0-adef-fdca5334d7de\") " pod="kube-system/coredns-5dd5756b68-fdtlk"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: I1026 15:13:19.723091    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/01c26bc9-c6c9-4eed-a838-d364398a7062-tmp\") pod \"storage-provisioner\" (UID: \"01c26bc9-c6c9-4eed-a838-d364398a7062\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:19 old-k8s-version-304880 kubelet[1367]: W1026 15:13:19.891337    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-351f0935d8b6b702eaf9ee1c895275163bbe272d9603e1c6318b53d1797c1ec4 WatchSource:0}: Error finding container 351f0935d8b6b702eaf9ee1c895275163bbe272d9603e1c6318b53d1797c1ec4: Status 404 returned error can't find the container with id 351f0935d8b6b702eaf9ee1c895275163bbe272d9603e1c6318b53d1797c1ec4
	Oct 26 15:13:20 old-k8s-version-304880 kubelet[1367]: W1026 15:13:20.470445    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-d24b654d8723334b04c91ced7aaa2817c3eda91af02f24bc89392b2d4abda2b6 WatchSource:0}: Error finding container d24b654d8723334b04c91ced7aaa2817c3eda91af02f24bc89392b2d4abda2b6: Status 404 returned error can't find the container with id d24b654d8723334b04c91ced7aaa2817c3eda91af02f24bc89392b2d4abda2b6
	Oct 26 15:13:21 old-k8s-version-304880 kubelet[1367]: I1026 15:13:21.149603    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.149557419 podCreationTimestamp="2025-10-26 15:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:20.146118076 +0000 UTC m=+27.424995684" watchObservedRunningTime="2025-10-26 15:13:21.149557419 +0000 UTC m=+28.428435035"
	Oct 26 15:13:21 old-k8s-version-304880 kubelet[1367]: I1026 15:13:21.172215    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fdtlk" podStartSLOduration=16.172161053 podCreationTimestamp="2025-10-26 15:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:21.150485085 +0000 UTC m=+28.429362692" watchObservedRunningTime="2025-10-26 15:13:21.172161053 +0000 UTC m=+28.451038669"
	Oct 26 15:13:23 old-k8s-version-304880 kubelet[1367]: I1026 15:13:23.497786    1367 topology_manager.go:215] "Topology Admit Handler" podUID="e84a2428-1939-453d-bca6-7b2884f6ea51" podNamespace="default" podName="busybox"
	Oct 26 15:13:23 old-k8s-version-304880 kubelet[1367]: I1026 15:13:23.645597    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzn88\" (UniqueName: \"kubernetes.io/projected/e84a2428-1939-453d-bca6-7b2884f6ea51-kube-api-access-mzn88\") pod \"busybox\" (UID: \"e84a2428-1939-453d-bca6-7b2884f6ea51\") " pod="default/busybox"
	
	
	==> storage-provisioner [7d56432edfd445e658ca1112aab10a2280472dcd1769be1343af2501212e90c0] <==
	I1026 15:13:19.941851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:13:19.956601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:13:19.956753       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:13:19.966902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:13:19.967192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_22049974-a70f-4aed-8b45-dbc64165f924!
	I1026 15:13:19.967779       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a57b129-0642-4616-9dc6-f67d3e08867c", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-304880_22049974-a70f-4aed-8b45-dbc64165f924 became leader
	I1026 15:13:20.068277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_22049974-a70f-4aed-8b45-dbc64165f924!
	

-- /stdout --
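The storage-provisioner output above is a standard client-go leader-election sequence: acquire the kube-system/k8s.io-minikube-hostpath lease, emit a LeaderElection event against the backing Endpoints object, then start the provisioner controller. For Endpoints-based locks the current holder is recorded in an annotation, so a quick way to inspect it is (illustrative command, assuming the profile is still up; not part of the test harness):

	kubectl --context old-k8s-version-304880 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'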
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-304880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)
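A note on the reflector errors in the kubelet log above ("configmaps \"coredns\" is forbidden ... no relationship found between node ..."): these come from the API server's node authorizer, which only lets a kubelet read a ConfigMap once a pod bound to that node references it, so during startup the ordering can race and the reflector simply retries. If the condition persisted, a hedged first check from the test host would be to confirm the pod-to-node binding the authorizer keys on:

	kubectl --context old-k8s-version-304880 -n kube-system get pods -o wide \
	  --field-selector spec.nodeName=old-k8s-version-304880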

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-304880 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-304880 --alsologtostderr -v=1: exit status 80 (2.066128408s)

-- stdout --
	* Pausing node old-k8s-version-304880 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:14:46.677437  893210 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:46.677733  893210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.677760  893210 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:46.677780  893210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.678112  893210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:14:46.678408  893210 out.go:368] Setting JSON to false
	I1026 15:14:46.678450  893210 mustload.go:65] Loading cluster: old-k8s-version-304880
	I1026 15:14:46.678939  893210 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:14:46.679465  893210 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:14:46.698723  893210 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:14:46.699048  893210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:46.769698  893210 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 15:14:46.758997614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:14:46.770437  893210 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-304880 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:14:46.773934  893210 out.go:179] * Pausing node old-k8s-version-304880 ... 
	I1026 15:14:46.777178  893210 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:14:46.777653  893210 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:46.777705  893210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:14:46.795575  893210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:14:46.903550  893210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:46.917117  893210 pause.go:52] kubelet running: true
	I1026 15:14:46.917192  893210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:47.164190  893210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:47.164274  893210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:47.233050  893210 cri.go:89] found id: "21b2e5379ae9d30caf86aad0ff02e62fe2339f039ce9266f17232ea235ddec07"
	I1026 15:14:47.233079  893210 cri.go:89] found id: "9f4870ebe7fda1cfed09a3942ae73022c0b81fb1a481240641c3e32e44de7666"
	I1026 15:14:47.233084  893210 cri.go:89] found id: "d5f4f97f50786460aae350051a6ee4871267ad06cf23f9c831680891272c419d"
	I1026 15:14:47.233088  893210 cri.go:89] found id: "712f20d7bb2d38f8edba961e0c44bda92a7a3f6c0da47f9d03c382368a373990"
	I1026 15:14:47.233092  893210 cri.go:89] found id: "484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047"
	I1026 15:14:47.233095  893210 cri.go:89] found id: "7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf"
	I1026 15:14:47.233098  893210 cri.go:89] found id: "8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d"
	I1026 15:14:47.233101  893210 cri.go:89] found id: "940c72d34c2c196e0a7e52a95d277e21da8b2e50a64301dc1c33710098582c12"
	I1026 15:14:47.233105  893210 cri.go:89] found id: "bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be"
	I1026 15:14:47.233112  893210 cri.go:89] found id: "cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	I1026 15:14:47.233115  893210 cri.go:89] found id: "0262c0af4a8456676c4e3a7de2c2ae2379faa24ef1df396371303d7adacd1785"
	I1026 15:14:47.233118  893210 cri.go:89] found id: ""
	I1026 15:14:47.233172  893210 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:47.244666  893210 retry.go:31] will retry after 310.984966ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:47Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:14:47.556224  893210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:47.569735  893210 pause.go:52] kubelet running: false
	I1026 15:14:47.569798  893210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:47.746501  893210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:47.746648  893210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:47.830321  893210 cri.go:89] found id: "21b2e5379ae9d30caf86aad0ff02e62fe2339f039ce9266f17232ea235ddec07"
	I1026 15:14:47.830393  893210 cri.go:89] found id: "9f4870ebe7fda1cfed09a3942ae73022c0b81fb1a481240641c3e32e44de7666"
	I1026 15:14:47.830404  893210 cri.go:89] found id: "d5f4f97f50786460aae350051a6ee4871267ad06cf23f9c831680891272c419d"
	I1026 15:14:47.830409  893210 cri.go:89] found id: "712f20d7bb2d38f8edba961e0c44bda92a7a3f6c0da47f9d03c382368a373990"
	I1026 15:14:47.830413  893210 cri.go:89] found id: "484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047"
	I1026 15:14:47.830417  893210 cri.go:89] found id: "7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf"
	I1026 15:14:47.830420  893210 cri.go:89] found id: "8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d"
	I1026 15:14:47.830424  893210 cri.go:89] found id: "940c72d34c2c196e0a7e52a95d277e21da8b2e50a64301dc1c33710098582c12"
	I1026 15:14:47.830427  893210 cri.go:89] found id: "bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be"
	I1026 15:14:47.830434  893210 cri.go:89] found id: "cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	I1026 15:14:47.830438  893210 cri.go:89] found id: "0262c0af4a8456676c4e3a7de2c2ae2379faa24ef1df396371303d7adacd1785"
	I1026 15:14:47.830441  893210 cri.go:89] found id: ""
	I1026 15:14:47.830516  893210 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:47.845565  893210 retry.go:31] will retry after 544.444566ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:47Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:14:48.390299  893210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:48.411245  893210 pause.go:52] kubelet running: false
	I1026 15:14:48.411321  893210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:48.591594  893210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:48.591681  893210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:48.659642  893210 cri.go:89] found id: "21b2e5379ae9d30caf86aad0ff02e62fe2339f039ce9266f17232ea235ddec07"
	I1026 15:14:48.659682  893210 cri.go:89] found id: "9f4870ebe7fda1cfed09a3942ae73022c0b81fb1a481240641c3e32e44de7666"
	I1026 15:14:48.659688  893210 cri.go:89] found id: "d5f4f97f50786460aae350051a6ee4871267ad06cf23f9c831680891272c419d"
	I1026 15:14:48.659692  893210 cri.go:89] found id: "712f20d7bb2d38f8edba961e0c44bda92a7a3f6c0da47f9d03c382368a373990"
	I1026 15:14:48.659695  893210 cri.go:89] found id: "484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047"
	I1026 15:14:48.659699  893210 cri.go:89] found id: "7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf"
	I1026 15:14:48.659702  893210 cri.go:89] found id: "8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d"
	I1026 15:14:48.659706  893210 cri.go:89] found id: "940c72d34c2c196e0a7e52a95d277e21da8b2e50a64301dc1c33710098582c12"
	I1026 15:14:48.659709  893210 cri.go:89] found id: "bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be"
	I1026 15:14:48.659720  893210 cri.go:89] found id: "cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	I1026 15:14:48.659727  893210 cri.go:89] found id: "0262c0af4a8456676c4e3a7de2c2ae2379faa24ef1df396371303d7adacd1785"
	I1026 15:14:48.659731  893210 cri.go:89] found id: ""
	I1026 15:14:48.659783  893210 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:48.676503  893210 out.go:203] 
	W1026 15:14:48.679508  893210 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:14:48.679532  893210 out.go:285] * 
	* 
	W1026 15:14:48.686476  893210 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:14:48.689599  893210 out.go:203] 

** /stderr **
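The failure mode above is mechanical: pause shells out to `sudo runc list -f json`, runc reads its default state directory /run/runc, that directory does not exist inside this CRI-O node, and each retry fails identically until the retry budget is exhausted and GUEST_PAUSE is raised. A minimal reproduction sketch from the test host (illustrative commands, assuming the profile is still running):

	# the state dir runc expects is absent, matching the stderr above
	out/minikube-linux-arm64 ssh -p old-k8s-version-304880 -- sudo ls /run/runc
	# CRI-O itself still sees the containers, which is why the crictl listing in cri.go succeeds
	out/minikube-linux-arm64 ssh -p old-k8s-version-304880 -- sudo crictl ps -a --quiet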
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-304880 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-304880
helpers_test.go:243: (dbg) docker inspect old-k8s-version-304880:

-- stdout --
	[
	    {
	        "Id": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	        "Created": "2025-10-26T15:12:25.477698676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 891120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:47.747469677Z",
	            "FinishedAt": "2025-10-26T15:13:46.928509525Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hosts",
	        "LogPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e-json.log",
	        "Name": "/old-k8s-version-304880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-304880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-304880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	                "LowerDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-304880",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-304880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-304880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ec93ff87f347e63b0fd11108eec4d90870b45569ae9cd510b0dad353e934b18",
	            "SandboxKey": "/var/run/docker/netns/0ec93ff87f34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-304880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:05:99:a0:b0:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "898d058c113eb58f8fe58567875d58d2d8a62f1424e6f7b780d853a2a1be653f",
	                    "EndpointID": "aba588c08dced39ad652c62a838a8d83f7157d1e5be979913051d40ebb2d0f8c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-304880",
	                        "47abca8f012a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
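The harness pulls the forwarded SSH port out of this inspect output with the same Go template that appears in the pause log above; run standalone it looks like this (illustrative):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-304880
	# prints 33822 for the state captured here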
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880: exit status 2 (369.718124ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25: (1.395846634s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo containerd config dump                                                                                                                                                                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                                                                                                                                                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p pause-013921                                                                                                                                                                                                                               │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-963871   │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:47.474228  890986 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:47.474389  890986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:47.474401  890986 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:47.474421  890986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:47.475267  890986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:13:47.475745  890986 out.go:368] Setting JSON to false
	I1026 15:13:47.476666  890986 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17780,"bootTime":1761473848,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:13:47.476795  890986 start.go:141] virtualization:  
	I1026 15:13:47.479941  890986 out.go:179] * [old-k8s-version-304880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:13:47.483725  890986 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:47.483886  890986 notify.go:220] Checking for updates...
	I1026 15:13:47.488442  890986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:47.491382  890986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:47.494326  890986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:13:47.497321  890986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:13:47.500292  890986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:47.503734  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:47.507352  890986 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 15:13:47.510386  890986 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:47.538048  890986 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:13:47.538169  890986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:47.601895  890986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:13:47.592117497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:13:47.602014  890986 docker.go:318] overlay module found
	I1026 15:13:47.605043  890986 out.go:179] * Using the docker driver based on existing profile
	I1026 15:13:47.607801  890986 start.go:305] selected driver: docker
	I1026 15:13:47.607819  890986 start.go:925] validating driver "docker" against &{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:47.607961  890986 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:47.608754  890986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:47.663611  890986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:13:47.654171032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:13:47.664086  890986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:47.664124  890986 cni.go:84] Creating CNI manager for ""
	I1026 15:13:47.664216  890986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:47.664269  890986 start.go:349] cluster config:
	{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
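
The cluster config dumped above is the same structure minikube persists to the profile's config.json (see the "Saving config" lines later in the log). A minimal Go sketch of reading such a profile back, assuming a trimmed-down struct carrying only the fields shown here (the real minikube struct has many more):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Trimmed-down view of the profile config dumped in the log above; the
    // field names mirror the keys shown there (assumption for illustration).
    type ClusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig struct {
    		KubernetesVersion string
    		ContainerRuntime  string
    	}
    }

    func main() {
    	// Path layout taken from the "Saving config" lines in the log.
    	path := os.ExpandEnv("$HOME/.minikube/profiles/old-k8s-version-304880/config.json")
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	var cfg ClusterConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n", cfg.Name, cfg.Driver,
    		cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
    }
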
	I1026 15:13:47.667411  890986 out.go:179] * Starting "old-k8s-version-304880" primary control-plane node in "old-k8s-version-304880" cluster
	I1026 15:13:47.670217  890986 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:47.673144  890986 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:47.676014  890986 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:13:47.676077  890986 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 15:13:47.676091  890986 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:47.676104  890986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:47.676180  890986 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:13:47.676193  890986 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 15:13:47.676306  890986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:13:47.695925  890986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:47.695949  890986 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:47.695962  890986 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:47.695984  890986 start.go:360] acquireMachinesLock for old-k8s-version-304880: {Name:mk7199322885b6a14cdd6d843ed9457416dde222 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:47.696051  890986 start.go:364] duration metric: took 36.161µs to acquireMachinesLock for "old-k8s-version-304880"
	I1026 15:13:47.696086  890986 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:13:47.696093  890986 fix.go:54] fixHost starting: 
	I1026 15:13:47.696354  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:47.713733  890986 fix.go:112] recreateIfNeeded on old-k8s-version-304880: state=Stopped err=<nil>
	W1026 15:13:47.713763  890986 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:13:47.716979  890986 out.go:252] * Restarting existing docker container for "old-k8s-version-304880" ...
	I1026 15:13:47.717081  890986 cli_runner.go:164] Run: docker start old-k8s-version-304880
	I1026 15:13:47.974016  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:47.996790  890986 kic.go:430] container "old-k8s-version-304880" state is running.
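
The restart path above is driven entirely by the Docker CLI: inspect the container state, `docker start` it if stopped, then inspect again. A rough stand-alone equivalent of those logged commands (a sketch, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState mirrors the logged
    // "docker container inspect <name> --format={{.State.Status}}".
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const name = "old-k8s-version-304880"
    	state, err := containerState(name)
    	if err != nil {
    		panic(err)
    	}
    	if state != "running" {
    		// Mirrors the logged "docker start old-k8s-version-304880".
    		if err := exec.Command("docker", "start", name).Run(); err != nil {
    			panic(err)
    		}
    	}
    	state, _ = containerState(name)
    	fmt.Println("state:", state)
    }
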
	I1026 15:13:47.997169  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:48.020894  890986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:13:48.021138  890986 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:48.021210  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:48.043045  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:48.043606  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:48.043620  890986 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:48.044307  890986 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:13:51.196730  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:13:51.196762  890986 ubuntu.go:182] provisioning hostname "old-k8s-version-304880"
	I1026 15:13:51.196842  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:51.214370  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:51.214683  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:51.214698  890986 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-304880 && echo "old-k8s-version-304880" | sudo tee /etc/hostname
	I1026 15:13:51.381841  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:13:51.381938  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:51.400047  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:51.400365  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:51.400388  890986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-304880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-304880/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-304880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:51.557597  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
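
The shell fragment above makes the /etc/hosts update idempotent: do nothing if a line already ends in the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. The same logic sketched in Go (hostname and path taken from the log; the regexes approximate the grep/sed patterns):

    package main

    import (
    	"os"
    	"regexp"
    )

    // ensureHostsEntry reproduces the shell logic: leave the file alone if a
    // line already ends in the hostname, otherwise replace the 127.0.1.1
    // entry or append a new one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // already present, nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.Match(data) {
    		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
    	} else {
    		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-304880"); err != nil {
    		panic(err)
    	}
    }
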
	I1026 15:13:51.557632  890986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:13:51.557666  890986 ubuntu.go:190] setting up certificates
	I1026 15:13:51.557680  890986 provision.go:84] configureAuth start
	I1026 15:13:51.557752  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:51.575498  890986 provision.go:143] copyHostCerts
	I1026 15:13:51.575570  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:13:51.575591  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:13:51.575671  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:13:51.575775  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:13:51.575787  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:13:51.575821  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:13:51.575879  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:13:51.575888  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:13:51.575912  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:13:51.575963  890986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-304880 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-304880]
	I1026 15:13:52.256636  890986 provision.go:177] copyRemoteCerts
	I1026 15:13:52.256721  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:52.256764  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.275202  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:52.381627  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:52.399925  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:13:52.426605  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:13:52.445584  890986 provision.go:87] duration metric: took 887.890224ms to configureAuth
	I1026 15:13:52.445611  890986 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:52.445816  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:52.445935  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.464465  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:52.464847  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:52.464866  890986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:52.785060  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:52.785085  890986 machine.go:96] duration metric: took 4.76393787s to provisionDockerMachine
	I1026 15:13:52.785097  890986 start.go:293] postStartSetup for "old-k8s-version-304880" (driver="docker")
	I1026 15:13:52.785134  890986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:52.785205  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:52.785263  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.804971  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:52.914319  890986 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:52.917787  890986 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:52.917857  890986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:52.917875  890986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:13:52.917939  890986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:13:52.918024  890986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:13:52.918131  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:52.925765  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:13:52.943799  890986 start.go:296] duration metric: took 158.685904ms for postStartSetup
	I1026 15:13:52.943927  890986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:52.943976  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.962218  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.070065  890986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:53.076292  890986 fix.go:56] duration metric: took 5.380192252s for fixHost
	I1026 15:13:53.076318  890986 start.go:83] releasing machines lock for "old-k8s-version-304880", held for 5.380252404s
	I1026 15:13:53.076400  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:53.095248  890986 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:53.095317  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:53.095598  890986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:53.095676  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:53.122781  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.127927  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.224589  890986 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:53.325950  890986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:53.363361  890986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:53.369129  890986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:53.369227  890986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:53.379244  890986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:53.379270  890986 start.go:495] detecting cgroup driver to use...
	I1026 15:13:53.379334  890986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:13:53.379420  890986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:53.394909  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:53.409537  890986 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:53.409602  890986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:53.426144  890986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:53.440347  890986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:53.568275  890986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:53.682109  890986 docker.go:234] disabling docker service ...
	I1026 15:13:53.682193  890986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:53.698071  890986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:53.712827  890986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:53.832775  890986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:53.949752  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:53.967582  890986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:53.982923  890986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 15:13:53.983012  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:53.994636  890986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:13:53.994728  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.006726  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.018623  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.028547  890986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:54.037612  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.047525  890986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.056845  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.066618  890986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:54.074788  890986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:54.082761  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:54.207289  890986 ssh_runner.go:195] Run: sudo systemctl restart crio
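
The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. A Go sketch applying the two central substitutions, pause_image and cgroup_manager, with the values the log uses (illustrative only; minikube drives this over SSH with sed):

    package main

    import (
    	"os"
    	"regexp"
    )

    // patchCrioConf applies the same two line substitutions the log performs
    // with sed; path and values are taken from the log lines above.
    func patchCrioConf(path, pauseImage, cgroupMgr string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
    		panic(err)
    	}
    	// "systemctl daemon-reload" and "systemctl restart crio" follow in the log.
    }
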
	I1026 15:13:54.338046  890986 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:54.338116  890986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:54.342105  890986 start.go:563] Will wait 60s for crictl version
	I1026 15:13:54.342167  890986 ssh_runner.go:195] Run: which crictl
	I1026 15:13:54.345705  890986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:54.384155  890986 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:54.384320  890986 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.414262  890986 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.450305  890986 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 15:13:54.453203  890986 cli_runner.go:164] Run: docker network inspect old-k8s-version-304880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:54.468823  890986 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:54.472816  890986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:54.483328  890986 kubeadm.go:883] updating cluster {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:54.483448  890986 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:13:54.483515  890986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.521080  890986 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.521108  890986 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:54.521166  890986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.550921  890986 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.550943  890986 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:54.550951  890986 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1026 15:13:54.551062  890986 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-304880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:54.551146  890986 ssh_runner.go:195] Run: crio config
	I1026 15:13:54.615360  890986 cni.go:84] Creating CNI manager for ""
	I1026 15:13:54.615386  890986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:54.615409  890986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:54.615432  890986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-304880 NodeName:old-k8s-version-304880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:54.615569  890986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-304880"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
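minikube renders the kubeadm manifest above from the option set logged at kubeadm.go:190. A toy text/template rendering of a small subset of it (the template and struct here are illustrative, not minikube's actual templates), with values copied from the log:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative subset of the kubeadm options logged above.
    type kubeadmOpts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	PodSubnet        string
    	ServiceCIDR      string
    	K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values copied from the kubeadm options in the log.
    	if err := t.Execute(os.Stdout, kubeadmOpts{
    		AdvertiseAddress: "192.168.76.2",
    		APIServerPort:    8443,
    		NodeName:         "old-k8s-version-304880",
    		PodSubnet:        "10.244.0.0/16",
    		ServiceCIDR:      "10.96.0.0/12",
    		K8sVersion:       "v1.28.0",
    	}); err != nil {
    		panic(err)
    	}
    }
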
	I1026 15:13:54.615639  890986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 15:13:54.623741  890986 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:54.623841  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:54.631588  890986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 15:13:54.644526  890986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:54.658001  890986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1026 15:13:54.671950  890986 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:54.675974  890986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:54.685805  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:54.813185  890986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:54.831902  890986 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880 for IP: 192.168.76.2
	I1026 15:13:54.831924  890986 certs.go:195] generating shared ca certs ...
	I1026 15:13:54.831941  890986 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:54.832083  890986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:13:54.832136  890986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:13:54.832156  890986 certs.go:257] generating profile certs ...
	I1026 15:13:54.832253  890986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.key
	I1026 15:13:54.832322  890986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e
	I1026 15:13:54.832365  890986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key
	I1026 15:13:54.832495  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:13:54.832533  890986 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:54.832548  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:54.832585  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:54.832610  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:54.832633  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:54.832687  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:13:54.833503  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:54.860943  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:13:54.885327  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:54.907184  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:13:54.932224  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:13:54.957998  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:13:54.987735  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:55.008336  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:55.040780  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:13:55.066014  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:13:55.085580  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:55.105321  890986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:55.119350  890986 ssh_runner.go:195] Run: openssl version
	I1026 15:13:55.125820  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:13:55.135077  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.138991  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.139113  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.186442  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:13:55.194627  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:55.203809  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.207540  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.207604  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.248334  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:55.256485  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:13:55.264983  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.269265  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.269378  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.310617  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:55.318431  890986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:55.322035  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:55.362971  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:55.405826  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:55.446659  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:55.493264  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:55.546719  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
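
Each `openssl x509 ... -checkend 86400` above asks whether a certificate expires within the next 86400 seconds (24 h). The equivalent check in Go's crypto/x509, using one of the certificate paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // d, the same question "openssl x509 -checkend 86400" answers for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
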
	I1026 15:13:55.602320  890986 kubeadm.go:400] StartCluster: {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:55.602461  890986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:55.602565  890986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:55.703282  890986 cri.go:89] found id: "7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf"
	I1026 15:13:55.703357  890986 cri.go:89] found id: "8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d"
	I1026 15:13:55.703394  890986 cri.go:89] found id: "bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be"
	I1026 15:13:55.703425  890986 cri.go:89] found id: ""
	I1026 15:13:55.703510  890986 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:55.731077  890986 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:55Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:55.731224  890986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:55.749773  890986 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:55.749845  890986 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:55.749937  890986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:55.762780  890986 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:55.763464  890986 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-304880" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:55.763784  890986 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-304880" cluster setting kubeconfig missing "old-k8s-version-304880" context setting]
	I1026 15:13:55.764354  890986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.766152  890986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:55.782644  890986 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:13:55.782718  890986 kubeadm.go:601] duration metric: took 32.853208ms to restartPrimaryControlPlane
	I1026 15:13:55.782748  890986 kubeadm.go:402] duration metric: took 180.434142ms to StartCluster
	I1026 15:13:55.782793  890986 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.782876  890986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:55.783808  890986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.784081  890986 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:55.784490  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:55.784463  890986 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:55.784556  890986 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-304880"
	I1026 15:13:55.784582  890986 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-304880"
	W1026 15:13:55.784596  890986 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:55.784619  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.784844  890986 addons.go:69] Setting dashboard=true in profile "old-k8s-version-304880"
	I1026 15:13:55.784873  890986 addons.go:238] Setting addon dashboard=true in "old-k8s-version-304880"
	W1026 15:13:55.784940  890986 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:55.784974  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.785137  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.785604  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.786137  890986 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-304880"
	I1026 15:13:55.786163  890986 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-304880"
	I1026 15:13:55.786450  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.789924  890986 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:55.795311  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:55.830866  890986 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:13:55.840818  890986 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:55.845706  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:55.845735  890986 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:55.845821  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.855843  890986 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-304880"
	W1026 15:13:55.855875  890986 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:55.855900  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.856305  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.859706  890986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:55.864532  890986 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:55.864555  890986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:55.864633  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.921081  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:55.923654  890986 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:55.923674  890986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:55.923736  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.939114  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:55.958452  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:56.153576  890986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:56.169756  890986 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:13:56.251239  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:56.276050  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:56.276075  890986 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:56.340470  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:56.342743  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:56.342769  890986 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:56.400637  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:56.400667  890986 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:56.490266  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:56.490290  890986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:56.546850  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:56.546877  890986 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:56.610697  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:56.610724  890986 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:56.634623  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:56.634650  890986 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:56.659701  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:56.659729  890986 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:56.683484  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:56.683511  890986 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:56.706305  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:14:00.945329  890986 node_ready.go:49] node "old-k8s-version-304880" is "Ready"
	I1026 15:14:00.945361  890986 node_ready.go:38] duration metric: took 4.775564981s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:14:00.945375  890986 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:00.945437  890986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:02.683511  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.432235997s)
	I1026 15:14:02.683574  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.343082294s)
	I1026 15:14:03.260605  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.554224354s)
	I1026 15:14:03.260952  890986 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.315488527s)
	I1026 15:14:03.260986  890986 api_server.go:72] duration metric: took 7.476854114s to wait for apiserver process to appear ...
	I1026 15:14:03.260993  890986 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:03.261010  890986 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:14:03.264154  890986 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-304880 addons enable metrics-server
	
	I1026 15:14:03.267174  890986 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1026 15:14:03.271097  890986 addons.go:514] duration metric: took 7.48662686s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 15:14:03.272968  890986 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:14:03.274875  890986 api_server.go:141] control plane version: v1.28.0
	I1026 15:14:03.274903  890986 api_server.go:131] duration metric: took 13.901711ms to wait for apiserver health ...
	I1026 15:14:03.274914  890986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:03.279121  890986 system_pods.go:59] 8 kube-system pods found
	I1026 15:14:03.279191  890986 system_pods.go:61] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:03.279210  890986 system_pods.go:61] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:03.279218  890986 system_pods.go:61] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:14:03.279231  890986 system_pods.go:61] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:14:03.279238  890986 system_pods.go:61] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:14:03.279248  890986 system_pods.go:61] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:14:03.279271  890986 system_pods.go:61] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:14:03.279283  890986 system_pods.go:61] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:14:03.279290  890986 system_pods.go:74] duration metric: took 4.370502ms to wait for pod list to return data ...
	I1026 15:14:03.279304  890986 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:03.286352  890986 default_sa.go:45] found service account: "default"
	I1026 15:14:03.286389  890986 default_sa.go:55] duration metric: took 7.077117ms for default service account to be created ...
	I1026 15:14:03.286400  890986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:03.291601  890986 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:03.291635  890986 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:03.291647  890986 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:03.291653  890986 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:14:03.291694  890986 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:14:03.291702  890986 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:14:03.291713  890986 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:14:03.291720  890986 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:14:03.291728  890986 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:14:03.291754  890986 system_pods.go:126] duration metric: took 5.348146ms to wait for k8s-apps to be running ...
	I1026 15:14:03.291770  890986 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:14:03.291853  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:03.306822  890986 system_svc.go:56] duration metric: took 15.036484ms WaitForService to wait for kubelet
	I1026 15:14:03.306863  890986 kubeadm.go:586] duration metric: took 7.52272895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:03.306883  890986 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:14:03.310070  890986 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:14:03.310102  890986 node_conditions.go:123] node cpu capacity is 2
	I1026 15:14:03.310114  890986 node_conditions.go:105] duration metric: took 3.225677ms to run NodePressure ...
	I1026 15:14:03.310150  890986 start.go:241] waiting for startup goroutines ...
	I1026 15:14:03.310163  890986 start.go:246] waiting for cluster config update ...
	I1026 15:14:03.310173  890986 start.go:255] writing updated cluster config ...
	I1026 15:14:03.310452  890986 ssh_runner.go:195] Run: rm -f paused
	I1026 15:14:03.315013  890986 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:03.320155  890986 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:14:05.326632  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:07.827200  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:10.325563  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:12.326319  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:14.855676  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:17.326724  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:19.327583  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:21.327856  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:23.328287  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:25.825563  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:27.826545  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:29.826955  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:32.326660  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	I1026 15:14:33.327137  890986 pod_ready.go:94] pod "coredns-5dd5756b68-fdtlk" is "Ready"
	I1026 15:14:33.327168  890986 pod_ready.go:86] duration metric: took 30.006983516s for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.330177  890986 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.335496  890986 pod_ready.go:94] pod "etcd-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.335524  890986 pod_ready.go:86] duration metric: took 5.316137ms for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.339184  890986 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.344631  890986 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.344661  890986 pod_ready.go:86] duration metric: took 5.451277ms for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.347910  890986 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.524891  890986 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.524921  890986 pod_ready.go:86] duration metric: took 176.982012ms for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.724953  890986 pod_ready.go:83] waiting for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.124955  890986 pod_ready.go:94] pod "kube-proxy-rsdnc" is "Ready"
	I1026 15:14:34.124984  890986 pod_ready.go:86] duration metric: took 400.003077ms for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.324650  890986 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.724771  890986 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-304880" is "Ready"
	I1026 15:14:34.724804  890986 pod_ready.go:86] duration metric: took 400.127468ms for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.724819  890986 pod_ready.go:40] duration metric: took 31.409741933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:34.789450  890986 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 15:14:34.792687  890986 out.go:203] 
	W1026 15:14:34.795785  890986 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:14:34.798717  890986 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:14:34.801795  890986 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-304880" cluster and "default" namespace by default
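	
	The component dumps that follow are the standard "minikube logs" collection for this profile. A minimal sketch of regenerating them, using the profile name from the run above (the output filename is illustrative):
	
	# Sketch: re-collect the per-component logs shown in the sections below.
	# --file writes the dump to a file instead of stdout.
	minikube -p old-k8s-version-304880 logs --file=old-k8s-version-304880.log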
	
	
	==> CRI-O <==
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.530677844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.538418862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.53896711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.557165664Z" level=info msg="Created container cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper" id=ab27a799-7658-40a1-bff7-74c1772457df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.558237167Z" level=info msg="Starting container: cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7" id=99a5a3fd-a5f4-4058-a4fc-e6663105826c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.560495802Z" level=info msg="Started container" PID=1632 containerID=cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper id=99a5a3fd-a5f4-4058-a4fc-e6663105826c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a
	Oct 26 15:14:34 old-k8s-version-304880 conmon[1630]: conmon cd6350fc96d6707c4f20 <ninfo>: container 1632 exited with status 1
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.243000608Z" level=info msg="Removing container: f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.260336325Z" level=info msg="Error loading conmon cgroup of container f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e: cgroup deleted" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.264032884Z" level=info msg="Removed container f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.944664767Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.952878939Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.95291491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.952938853Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.956346719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.956378161Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.95640043Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.960229511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.960267879Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.96029178Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.963970525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.964015088Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.964038801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.967155061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.967323875Z" level=info msg="Updated default CNI network name to kindnet"
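	
	The CREATE/WRITE/RENAME sequence above is kindnet rewriting its conflist atomically through a .temp file, with CRI-O reloading the default network on each event. A sketch of the file's typical shape, assuming the stock kindnet ptp + host-local + portmap layout; the actual contents were not captured, but the subnet matches this node's PodCIDR and the MTU matches the kindnet log further below:
	
	# Assumption: illustrative conflist, not read from the node; written to
	# /tmp so nothing on a live node is touched.
	cat <<-'EOF' >/tmp/10-kindnet.conflist.example
	{
	  "cniVersion": "0.3.1",
	  "name": "kindnet",
	  "plugins": [
	    {
	      "type": "ptp",
	      "mtu": 1500,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{"subnet": "10.244.0.0/24"}]],
	        "routes": [{"dst": "0.0.0.0/0"}]
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF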
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	cd6350fc96d67       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   d243eadf443ed       dashboard-metrics-scraper-5f989dc9cf-g4bbq       kubernetes-dashboard
	21b2e5379ae9d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   9782359542c60       storage-provisioner                              kube-system
	0262c0af4a845       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   2dba0ffac99a5       kubernetes-dashboard-8694d4445c-t54nl            kubernetes-dashboard
	9f4870ebe7fda       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   f93fc5d9a248e       coredns-5dd5756b68-fdtlk                         kube-system
	59cfc8e2ced06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   0d10b794441ec       busybox                                          default
	d5f4f97f50786       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   dea6d0b521dcd       kindnet-kwb2h                                    kube-system
	712f20d7bb2d3       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   8cd1d8a1ab4fb       kube-proxy-rsdnc                                 kube-system
	484bea0c25b53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   9782359542c60       storage-provisioner                              kube-system
	7fb91d6b4b519       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   98c7b68c41ea7       kube-apiserver-old-k8s-version-304880            kube-system
	8a82a194df0d6       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   ed700be9fdbab       etcd-old-k8s-version-304880                      kube-system
	940c72d34c2c1       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   43ce8ed7b69d8       kube-scheduler-old-k8s-version-304880            kube-system
	bc5d06093202e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   c9d4c18719f24       kube-controller-manager-old-k8s-version-304880   kube-system
	
	
	==> coredns [9f4870ebe7fda1cfed09a3942ae73022c0b81fb1a481240641c3e32e44de7666] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32939 - 47043 "HINFO IN 8841256784006009513.7448725774595104335. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015605622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
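	
	The i/o timeout above means the coredns pod briefly could not reach the in-cluster API Service at 10.96.0.1:443; it recovered, as the later "Caches are synced" kindnet entries show. A minimal sketch of re-probing that path from inside the cluster, assuming a throwaway curl pod (the image choice is illustrative):
	
	# Sketch: probe the kubernetes Service VIP coredns was timing out against.
	# Even an HTTP error would prove reachability; the failure above was a
	# TCP-level timeout. /version is readable by unauthenticated clients
	# under default RBAC (system:public-info-viewer).
	kubectl run api-check --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -ksS https://10.96.0.1:443/version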
	
	
	==> describe nodes <==
	Name:               old-k8s-version-304880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-304880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-304880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-304880
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:13:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-304880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d0d7db31-34b9-4b69-bff7-8420a1723dd8
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-fdtlk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-304880                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-kwb2h                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-304880             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-304880    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-rsdnc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-304880             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-g4bbq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-t54nl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-304880 event: Registered Node old-k8s-version-304880 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-304880 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                  node-controller  Node old-k8s-version-304880 event: Registered Node old-k8s-version-304880 in Controller
	
	
	==> dmesg <==
	[Oct26 14:47] overlayfs: idmapped layers are currently not supported
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d] <==
	{"level":"info","ts":"2025-10-26T15:13:56.249451Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:13:56.249479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:13:56.249556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:13:56.249563Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:13:56.249981Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.250011Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.250025Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.252077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-26T15:13:56.252158Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T15:13:56.27301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:13:56.273086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:13:57.124732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.124959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.124998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.125031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.136955Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-304880 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:13:57.137227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:13:57.138227Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T15:13:57.138639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:13:57.139574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T15:13:57.176752Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:13:57.176861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 15:14:50 up  4:57,  0 user,  load average: 2.10, 3.34, 2.89
	Linux old-k8s-version-304880 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5f4f97f50786460aae350051a6ee4871267ad06cf23f9c831680891272c419d] <==
	I1026 15:14:01.737493       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:14:01.737762       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:14:01.737909       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:14:01.737927       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:14:01.737939       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:14:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:14:01.941101       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:14:01.944609       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:14:01.944844       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:14:01.945036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:14:31.942135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:14:31.951839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:14:31.951991       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:14:31.952920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:14:33.545953       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:33.545989       1 metrics.go:72] Registering metrics
	I1026 15:14:33.546054       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:41.944310       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:41.944364       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf] <==
	I1026 15:14:00.960456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:14:01.002047       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:14:01.061742       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 15:14:01.061932       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:14:01.061977       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 15:14:01.062924       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 15:14:01.069085       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:14:01.069210       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:14:01.069412       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 15:14:01.070412       1 aggregator.go:166] initial CRD sync complete...
	I1026 15:14:01.070473       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 15:14:01.070501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:14:01.070529       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:14:01.201470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:14:01.685247       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:14:03.075378       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:14:03.125738       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:14:03.153649       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:14:03.167243       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:14:03.177103       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:14:03.234897       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.154.238"}
	I1026 15:14:03.253367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.165"}
	I1026 15:14:13.945455       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:14:14.043463       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:14:14.145792       1 controller.go:624] quota admission added evaluator for: replicasets.apps
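	
	The two "allocated clusterIPs" lines confirm the dashboard Services were created with VIPs 10.108.154.238 and 10.110.133.165. A one-line sketch to cross-check them (names and namespace are taken from the log):
	
	# Sketch: confirm the Service VIPs the apiserver logged above.
	kubectl -n kubernetes-dashboard get svc kubernetes-dashboard dashboard-metrics-scraper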
	
	
	==> kube-controller-manager [bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be] <==
	I1026 15:14:14.154666       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1026 15:14:14.203673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="429.856651ms"
	I1026 15:14:14.203869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.862µs"
	I1026 15:14:14.206534       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-t54nl"
	I1026 15:14:14.211556       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	I1026 15:14:14.221753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.34746ms"
	I1026 15:14:14.237029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.932918ms"
	I1026 15:14:14.237302       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:14:14.259379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.300015ms"
	I1026 15:14:14.260127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.83µs"
	I1026 15:14:14.265377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.568938ms"
	I1026 15:14:14.274747       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:14:14.274779       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:14:14.280111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.991µs"
	I1026 15:14:14.295858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.421681ms"
	I1026 15:14:14.295955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.553µs"
	I1026 15:14:20.226547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.427643ms"
	I1026 15:14:20.226640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.319µs"
	I1026 15:14:24.225732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.129µs"
	I1026 15:14:25.223931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="130.504µs"
	I1026 15:14:26.225719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.867µs"
	I1026 15:14:33.221368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.341351ms"
	I1026 15:14:33.221748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.5µs"
	I1026 15:14:35.276025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.533µs"
	I1026 15:14:44.542006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.02µs"
	
	
	==> kube-proxy [712f20d7bb2d38f8edba961e0c44bda92a7a3f6c0da47f9d03c382368a373990] <==
	I1026 15:14:01.966706       1 server_others.go:69] "Using iptables proxy"
	I1026 15:14:02.004951       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 15:14:02.158803       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:14:02.172915       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:14:02.172959       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:14:02.172968       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:14:02.173006       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:14:02.173268       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:14:02.173285       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:02.178949       1 config.go:188] "Starting service config controller"
	I1026 15:14:02.178975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:14:02.178994       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:14:02.178998       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:14:02.179442       1 config.go:315] "Starting node config controller"
	I1026 15:14:02.179449       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:14:02.280178       1 shared_informer.go:318] Caches are synced for node config
	I1026 15:14:02.281716       1 shared_informer.go:318] Caches are synced for service config
	I1026 15:14:02.281780       1 shared_informer.go:318] Caches are synced for endpoint slice config
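	
	kube-proxy reports the iptables proxier in IPv4-only mode, with all three config caches synced. A sketch of spot-checking its rules on the node (KUBE- is the standard kube-proxy chain prefix):
	
	# Sketch: count kube-proxy's NAT rules; a non-zero count means the
	# proxier has synced. Run on the node, e.g. via "minikube ssh".
	sudo iptables-save -t nat | grep -c '^-A KUBE-'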
	
	
	==> kube-scheduler [940c72d34c2c196e0a7e52a95d277e21da8b2e50a64301dc1c33710098582c12] <==
	I1026 15:13:59.420849       1 serving.go:348] Generated self-signed cert in-memory
	I1026 15:14:01.752502       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 15:14:01.752554       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:01.771933       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 15:14:01.772673       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 15:14:01.772609       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:01.774800       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:14:01.772643       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:14:01.775265       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 15:14:01.780767       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 15:14:01.780809       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 15:14:01.873193       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 15:14:01.875244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:14:01.877043       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.226025     770 topology_manager.go:215] "Topology Admit Handler" podUID="83f76863-0199-4177-997c-97bcca0ded43" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336266     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/83f76863-0199-4177-997c-97bcca0ded43-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-g4bbq\" (UID: \"83f76863-0199-4177-997c-97bcca0ded43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336551     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/835824df-847d-402e-b2b4-fa53792bffa6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-t54nl\" (UID: \"835824df-847d-402e-b2b4-fa53792bffa6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336609     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4spw\" (UniqueName: \"kubernetes.io/projected/83f76863-0199-4177-997c-97bcca0ded43-kube-api-access-c4spw\") pod \"dashboard-metrics-scraper-5f989dc9cf-g4bbq\" (UID: \"83f76863-0199-4177-997c-97bcca0ded43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336640     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jlz\" (UniqueName: \"kubernetes.io/projected/835824df-847d-402e-b2b4-fa53792bffa6-kube-api-access-79jlz\") pod \"kubernetes-dashboard-8694d4445c-t54nl\" (UID: \"835824df-847d-402e-b2b4-fa53792bffa6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: W1026 15:14:14.552941     770 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a WatchSource:0}: Error finding container 2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a: Status 404 returned error can't find the container with id 2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: W1026 15:14:14.569456     770 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a WatchSource:0}: Error finding container d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a: Status 404 returned error can't find the container with id d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a
	Oct 26 15:14:24 old-k8s-version-304880 kubelet[770]: I1026 15:14:24.202921     770 scope.go:117] "RemoveContainer" containerID="e8f7f0d668c5311fc7fd8b2e37a8397665bbd077a10d829c5328d9c9dfd54975"
	Oct 26 15:14:24 old-k8s-version-304880 kubelet[770]: I1026 15:14:24.222805     770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl" podStartSLOduration=5.307008215 podCreationTimestamp="2025-10-26 15:14:14 +0000 UTC" firstStartedPulling="2025-10-26 15:14:14.558632834 +0000 UTC m=+19.719285660" lastFinishedPulling="2025-10-26 15:14:19.474365877 +0000 UTC m=+24.635018802" observedRunningTime="2025-10-26 15:14:20.211000887 +0000 UTC m=+25.371653713" watchObservedRunningTime="2025-10-26 15:14:24.222741357 +0000 UTC m=+29.383394175"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: I1026 15:14:25.206420     770 scope.go:117] "RemoveContainer" containerID="e8f7f0d668c5311fc7fd8b2e37a8397665bbd077a10d829c5328d9c9dfd54975"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: I1026 15:14:25.206730     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: E1026 15:14:25.207005     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:26 old-k8s-version-304880 kubelet[770]: I1026 15:14:26.210738     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:26 old-k8s-version-304880 kubelet[770]: E1026 15:14:26.211028     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:32 old-k8s-version-304880 kubelet[770]: I1026 15:14:32.226711     770 scope.go:117] "RemoveContainer" containerID="484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047"
	Oct 26 15:14:34 old-k8s-version-304880 kubelet[770]: I1026 15:14:34.527944     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: I1026 15:14:35.240508     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: I1026 15:14:35.241140     770 scope.go:117] "RemoveContainer" containerID="cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: E1026 15:14:35.241423     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:44 old-k8s-version-304880 kubelet[770]: I1026 15:14:44.528355     770 scope.go:117] "RemoveContainer" containerID="cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	Oct 26 15:14:44 old-k8s-version-304880 kubelet[770]: E1026 15:14:44.529174     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:47 old-k8s-version-304880 kubelet[770]: I1026 15:14:47.101772     770 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0262c0af4a8456676c4e3a7de2c2ae2379faa24ef1df396371303d7adacd1785] <==
	2025/10/26 15:14:19 Using namespace: kubernetes-dashboard
	2025/10/26 15:14:19 Using in-cluster config to connect to apiserver
	2025/10/26 15:14:19 Using secret token for csrf signing
	2025/10/26 15:14:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:14:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:14:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 15:14:19 Generating JWE encryption key
	2025/10/26 15:14:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:14:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:14:20 Initializing JWE encryption key from synchronized object
	2025/10/26 15:14:20 Creating in-cluster Sidecar client
	2025/10/26 15:14:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:20 Serving insecurely on HTTP port: 9090
	2025/10/26 15:14:19 Starting overwatch
	
	
	==> storage-provisioner [21b2e5379ae9d30caf86aad0ff02e62fe2339f039ce9266f17232ea235ddec07] <==
	I1026 15:14:32.290443       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:14:32.305975       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:14:32.306020       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:14:49.707183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:14:49.707350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa!
	I1026 15:14:49.708026       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a57b129-0642-4616-9dc6-f67d3e08867c", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa became leader
	I1026 15:14:49.807776       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa!
	
	
	==> storage-provisioner [484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047] <==
	I1026 15:14:01.902484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:31.904960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-304880 -n old-k8s-version-304880: exit status 2 (384.093004ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-304880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-304880
helpers_test.go:243: (dbg) docker inspect old-k8s-version-304880:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	        "Created": "2025-10-26T15:12:25.477698676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 891120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:47.747469677Z",
	            "FinishedAt": "2025-10-26T15:13:46.928509525Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/hosts",
	        "LogPath": "/var/lib/docker/containers/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e-json.log",
	        "Name": "/old-k8s-version-304880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-304880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-304880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e",
	                "LowerDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbbc45f330762c17926e4e472ef12819877c2672917a1f225232dc8e1d1150aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-304880",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-304880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-304880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-304880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ec93ff87f347e63b0fd11108eec4d90870b45569ae9cd510b0dad353e934b18",
	            "SandboxKey": "/var/run/docker/netns/0ec93ff87f34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-304880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:05:99:a0:b0:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "898d058c113eb58f8fe58567875d58d2d8a62f1424e6f7b780d853a2a1be653f",
	                    "EndpointID": "aba588c08dced39ad652c62a838a8d83f7157d1e5be979913051d40ebb2d0f8c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-304880",
	                        "47abca8f012a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880: exit status 2 (372.287539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-304880 logs -n 25: (1.302693578s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo containerd config dump                                                                                                                                                                                                  │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                                                                                                                                                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p pause-013921                                                                                                                                                                                                                               │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-963871   │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:47.474228  890986 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:47.474389  890986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:47.474401  890986 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:47.474421  890986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:47.475267  890986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:13:47.475745  890986 out.go:368] Setting JSON to false
	I1026 15:13:47.476666  890986 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17780,"bootTime":1761473848,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:13:47.476795  890986 start.go:141] virtualization:  
	I1026 15:13:47.479941  890986 out.go:179] * [old-k8s-version-304880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:13:47.483725  890986 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:47.483886  890986 notify.go:220] Checking for updates...
	I1026 15:13:47.488442  890986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:47.491382  890986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:47.494326  890986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:13:47.497321  890986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:13:47.500292  890986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:47.503734  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:47.507352  890986 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 15:13:47.510386  890986 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:47.538048  890986 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:13:47.538169  890986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:47.601895  890986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:13:47.592117497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:13:47.602014  890986 docker.go:318] overlay module found
	I1026 15:13:47.605043  890986 out.go:179] * Using the docker driver based on existing profile
	I1026 15:13:47.607801  890986 start.go:305] selected driver: docker
	I1026 15:13:47.607819  890986 start.go:925] validating driver "docker" against &{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:47.607961  890986 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:47.608754  890986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:47.663611  890986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:13:47.654171032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:13:47.664086  890986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:47.664124  890986 cni.go:84] Creating CNI manager for ""
	I1026 15:13:47.664216  890986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:47.664269  890986 start.go:349] cluster config:
	{Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:47.667411  890986 out.go:179] * Starting "old-k8s-version-304880" primary control-plane node in "old-k8s-version-304880" cluster
	I1026 15:13:47.670217  890986 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:47.673144  890986 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:47.676014  890986 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:13:47.676077  890986 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 15:13:47.676091  890986 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:47.676104  890986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:47.676180  890986 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:13:47.676193  890986 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 15:13:47.676306  890986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:13:47.695925  890986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:47.695949  890986 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:47.695962  890986 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:47.695984  890986 start.go:360] acquireMachinesLock for old-k8s-version-304880: {Name:mk7199322885b6a14cdd6d843ed9457416dde222 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:47.696051  890986 start.go:364] duration metric: took 36.161µs to acquireMachinesLock for "old-k8s-version-304880"
	I1026 15:13:47.696086  890986 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:13:47.696093  890986 fix.go:54] fixHost starting: 
	I1026 15:13:47.696354  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:47.713733  890986 fix.go:112] recreateIfNeeded on old-k8s-version-304880: state=Stopped err=<nil>
	W1026 15:13:47.713763  890986 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:13:47.716979  890986 out.go:252] * Restarting existing docker container for "old-k8s-version-304880" ...
	I1026 15:13:47.717081  890986 cli_runner.go:164] Run: docker start old-k8s-version-304880
	I1026 15:13:47.974016  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:47.996790  890986 kic.go:430] container "old-k8s-version-304880" state is running.
	I1026 15:13:47.997169  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:48.020894  890986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/config.json ...
	I1026 15:13:48.021138  890986 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:48.021210  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:48.043045  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:48.043606  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:48.043620  890986 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:48.044307  890986 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:13:51.196730  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:13:51.196762  890986 ubuntu.go:182] provisioning hostname "old-k8s-version-304880"
	I1026 15:13:51.196842  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:51.214370  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:51.214683  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:51.214698  890986 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-304880 && echo "old-k8s-version-304880" | sudo tee /etc/hostname
	I1026 15:13:51.381841  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-304880
	
	I1026 15:13:51.381938  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:51.400047  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:51.400365  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:51.400388  890986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-304880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-304880/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-304880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:51.557597  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:51.557632  890986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:13:51.557666  890986 ubuntu.go:190] setting up certificates
	I1026 15:13:51.557680  890986 provision.go:84] configureAuth start
	I1026 15:13:51.557752  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:51.575498  890986 provision.go:143] copyHostCerts
	I1026 15:13:51.575570  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:13:51.575591  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:13:51.575671  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:13:51.575775  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:13:51.575787  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:13:51.575821  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:13:51.575879  890986 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:13:51.575888  890986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:13:51.575912  890986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:13:51.575963  890986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-304880 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-304880]
	I1026 15:13:52.256636  890986 provision.go:177] copyRemoteCerts
	I1026 15:13:52.256721  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:52.256764  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.275202  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:52.381627  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:52.399925  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:13:52.426605  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:13:52.445584  890986 provision.go:87] duration metric: took 887.890224ms to configureAuth
	I1026 15:13:52.445611  890986 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:52.445816  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:52.445935  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.464465  890986 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:52.464847  890986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:13:52.464866  890986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:52.785060  890986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:52.785085  890986 machine.go:96] duration metric: took 4.76393787s to provisionDockerMachine
	I1026 15:13:52.785097  890986 start.go:293] postStartSetup for "old-k8s-version-304880" (driver="docker")
	I1026 15:13:52.785134  890986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:52.785205  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:52.785263  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.804971  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:52.914319  890986 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:52.917787  890986 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:52.917857  890986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:52.917875  890986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:13:52.917939  890986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:13:52.918024  890986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:13:52.918131  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:52.925765  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:13:52.943799  890986 start.go:296] duration metric: took 158.685904ms for postStartSetup
	I1026 15:13:52.943927  890986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:52.943976  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:52.962218  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.070065  890986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:53.076292  890986 fix.go:56] duration metric: took 5.380192252s for fixHost
	I1026 15:13:53.076318  890986 start.go:83] releasing machines lock for "old-k8s-version-304880", held for 5.380252404s
	I1026 15:13:53.076400  890986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-304880
	I1026 15:13:53.095248  890986 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:53.095317  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:53.095598  890986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:53.095676  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:53.122781  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.127927  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:53.224589  890986 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:53.325950  890986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:53.363361  890986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:53.369129  890986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:53.369227  890986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:53.379244  890986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:53.379270  890986 start.go:495] detecting cgroup driver to use...
	I1026 15:13:53.379334  890986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:13:53.379420  890986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:53.394909  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:53.409537  890986 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:53.409602  890986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:53.426144  890986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:53.440347  890986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:53.568275  890986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:53.682109  890986 docker.go:234] disabling docker service ...
	I1026 15:13:53.682193  890986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:53.698071  890986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:53.712827  890986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:53.832775  890986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:53.949752  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:53.967582  890986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:53.982923  890986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 15:13:53.983012  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:53.994636  890986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:13:53.994728  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.006726  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.018623  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.028547  890986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:54.037612  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.047525  890986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.056845  890986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:54.066618  890986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:54.074788  890986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:54.082761  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:54.207289  890986 ssh_runner.go:195] Run: sudo systemctl restart crio
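
The run of sed invocations above is how minikube rewrites CRI-O's drop-in config before restarting it: pin the pause image, switch the cgroup manager to match the host, force conmon into the pod cgroup, and open unprivileged ports. A minimal Go sketch of the two central edits, assuming the drop-in path shown in the log; the real sequence also handles conmon_cgroup and default_sysctls, then runs systemctl daemon-reload and systemctl restart crio:

package main

import (
	"os"
	"regexp"
)

func main() {
	// Drop-in file shown in the log lines above.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	// ("cgroupfs" is the driver detected on the host earlier in the log).
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
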
	I1026 15:13:54.338046  890986 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:54.338116  890986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:54.342105  890986 start.go:563] Will wait 60s for crictl version
	I1026 15:13:54.342167  890986 ssh_runner.go:195] Run: which crictl
	I1026 15:13:54.345705  890986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:54.384155  890986 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:54.384320  890986 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.414262  890986 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.450305  890986 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 15:13:54.453203  890986 cli_runner.go:164] Run: docker network inspect old-k8s-version-304880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:54.468823  890986 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:54.472816  890986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:54.483328  890986 kubeadm.go:883] updating cluster {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:54.483448  890986 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:13:54.483515  890986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.521080  890986 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.521108  890986 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:54.521166  890986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.550921  890986 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.550943  890986 cache_images.go:85] Images are preloaded, skipping loading
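
The preload check above shells out to crictl images --output json and inspects the result to decide whether image extraction can be skipped. A minimal sketch of that decoding step, assuming crictl's JSON field names (id, repoTags) from the CRI ListImages response:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Compare these tags against the expected preload set for v1.28.0/crio.
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags)
	}
}
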
	I1026 15:13:54.550951  890986 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1026 15:13:54.551062  890986 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-304880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:54.551146  890986 ssh_runner.go:195] Run: crio config
	I1026 15:13:54.615360  890986 cni.go:84] Creating CNI manager for ""
	I1026 15:13:54.615386  890986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:54.615409  890986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:54.615432  890986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-304880 NodeName:old-k8s-version-304880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:54.615569  890986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-304880"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
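
The rendered config above is a standard multi-document kubeadm file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A minimal sketch that walks those documents and prints each kind, assuming the destination path used below and gopkg.in/yaml.v3 (an assumption, purely for illustration) for multi-document decoding:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path the config is scp'd to a few lines below.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yaml.v3 handles "---" separated documents
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// e.g. "kubeadm.k8s.io/v1beta3 InitConfiguration"
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
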
	I1026 15:13:54.615639  890986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 15:13:54.623741  890986 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:54.623841  890986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:54.631588  890986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 15:13:54.644526  890986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:54.658001  890986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1026 15:13:54.671950  890986 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:54.675974  890986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
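
The /etc/hosts pipeline above (grep -v the old line, append the new mapping, write to a temp file, then sudo cp it back) updates the entry without replacing the file's inode, which matters because /etc/hosts is typically bind-mounted into the container. A minimal sketch of the same filter-and-append logic:

package main

import (
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.76.2\t" + name // IP and name from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Same filter as grep -v $'\tcontrol-plane.minikube.internal$'.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The real pipeline writes /tmp/h.$$ and `sudo cp`s it over /etc/hosts;
	// writing directly here keeps the sketch short.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
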
	I1026 15:13:54.685805  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:54.813185  890986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:54.831902  890986 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880 for IP: 192.168.76.2
	I1026 15:13:54.831924  890986 certs.go:195] generating shared ca certs ...
	I1026 15:13:54.831941  890986 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:54.832083  890986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:13:54.832136  890986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:13:54.832156  890986 certs.go:257] generating profile certs ...
	I1026 15:13:54.832253  890986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.key
	I1026 15:13:54.832322  890986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key.2229c60e
	I1026 15:13:54.832365  890986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key
	I1026 15:13:54.832495  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:13:54.832533  890986 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:54.832548  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:54.832585  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:54.832610  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:54.832633  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:54.832687  890986 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:13:54.833503  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:54.860943  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:13:54.885327  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:54.907184  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:13:54.932224  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:13:54.957998  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:13:54.987735  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:55.008336  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:55.040780  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:13:55.066014  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:13:55.085580  890986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:55.105321  890986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:55.119350  890986 ssh_runner.go:195] Run: openssl version
	I1026 15:13:55.125820  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:13:55.135077  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.138991  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.139113  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:13:55.186442  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:13:55.194627  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:55.203809  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.207540  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.207604  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.248334  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:55.256485  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:13:55.264983  890986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.269265  890986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.269378  890986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:13:55.310617  890986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
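
Each block above hashes a CA certificate with openssl x509 -hash and links it into /etc/ssl/certs as <hash>.0, the layout OpenSSL uses to look up trust anchors by subject hash. A minimal sketch of one iteration, using the minikubeCA path and the b5213941 hash seen in the log:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log

	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
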
	I1026 15:13:55.318431  890986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:55.322035  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:55.362971  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:55.405826  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:55.446659  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:55.493264  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:55.546719  890986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:13:55.602320  890986 kubeadm.go:400] StartCluster: {Name:old-k8s-version-304880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-304880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:55.602461  890986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:55.602565  890986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:55.703282  890986 cri.go:89] found id: "7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf"
	I1026 15:13:55.703357  890986 cri.go:89] found id: "8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d"
	I1026 15:13:55.703394  890986 cri.go:89] found id: "bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be"
	I1026 15:13:55.703425  890986 cri.go:89] found id: ""
	I1026 15:13:55.703510  890986 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:55.731077  890986 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:55Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:55.731224  890986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:55.749773  890986 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:55.749845  890986 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:55.749937  890986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:55.762780  890986 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:55.763464  890986 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-304880" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:55.763784  890986 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-304880" cluster setting kubeconfig missing "old-k8s-version-304880" context setting]
	I1026 15:13:55.764354  890986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.766152  890986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:55.782644  890986 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:13:55.782718  890986 kubeadm.go:601] duration metric: took 32.853208ms to restartPrimaryControlPlane
	I1026 15:13:55.782748  890986 kubeadm.go:402] duration metric: took 180.434142ms to StartCluster
	I1026 15:13:55.782793  890986 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.782876  890986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:13:55.783808  890986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.784081  890986 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:55.784490  890986 config.go:182] Loaded profile config "old-k8s-version-304880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:13:55.784463  890986 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:55.784556  890986 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-304880"
	I1026 15:13:55.784582  890986 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-304880"
	W1026 15:13:55.784596  890986 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:55.784619  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.784844  890986 addons.go:69] Setting dashboard=true in profile "old-k8s-version-304880"
	I1026 15:13:55.784873  890986 addons.go:238] Setting addon dashboard=true in "old-k8s-version-304880"
	W1026 15:13:55.784940  890986 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:55.784974  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.785137  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.785604  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.786137  890986 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-304880"
	I1026 15:13:55.786163  890986 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-304880"
	I1026 15:13:55.786450  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.789924  890986 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:55.795311  890986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:55.830866  890986 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:13:55.840818  890986 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:55.845706  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:55.845735  890986 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:55.845821  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.855843  890986 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-304880"
	W1026 15:13:55.855875  890986 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:55.855900  890986 host.go:66] Checking if "old-k8s-version-304880" exists ...
	I1026 15:13:55.856305  890986 cli_runner.go:164] Run: docker container inspect old-k8s-version-304880 --format={{.State.Status}}
	I1026 15:13:55.859706  890986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:55.864532  890986 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:55.864555  890986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:55.864633  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.921081  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:55.923654  890986 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:55.923674  890986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:55.923736  890986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-304880
	I1026 15:13:55.939114  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:55.958452  890986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/old-k8s-version-304880/id_rsa Username:docker}
	I1026 15:13:56.153576  890986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:56.169756  890986 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:13:56.251239  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:56.276050  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:56.276075  890986 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:56.340470  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:56.342743  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:56.342769  890986 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:56.400637  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:56.400667  890986 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:56.490266  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:56.490290  890986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:56.546850  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:56.546877  890986 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:56.610697  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:56.610724  890986 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:56.634623  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:56.634650  890986 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:56.659701  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:56.659729  890986 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:56.683484  890986 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:56.683511  890986 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:56.706305  890986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:14:00.945329  890986 node_ready.go:49] node "old-k8s-version-304880" is "Ready"
	I1026 15:14:00.945361  890986 node_ready.go:38] duration metric: took 4.775564981s for node "old-k8s-version-304880" to be "Ready" ...
	I1026 15:14:00.945375  890986 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:00.945437  890986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:02.683511  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.432235997s)
	I1026 15:14:02.683574  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.343082294s)
	I1026 15:14:03.260605  890986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.554224354s)
	I1026 15:14:03.260952  890986 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.315488527s)
	I1026 15:14:03.260986  890986 api_server.go:72] duration metric: took 7.476854114s to wait for apiserver process to appear ...
	I1026 15:14:03.260993  890986 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:03.261010  890986 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:14:03.264154  890986 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-304880 addons enable metrics-server
	
	I1026 15:14:03.267174  890986 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1026 15:14:03.271097  890986 addons.go:514] duration metric: took 7.48662686s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 15:14:03.272968  890986 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:14:03.274875  890986 api_server.go:141] control plane version: v1.28.0
	I1026 15:14:03.274903  890986 api_server.go:131] duration metric: took 13.901711ms to wait for apiserver health ...
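
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it returns 200 ok. A minimal sketch of a single probe; skipping TLS verification keeps the sketch short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: minikube verifies against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
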
	I1026 15:14:03.274914  890986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:03.279121  890986 system_pods.go:59] 8 kube-system pods found
	I1026 15:14:03.279191  890986 system_pods.go:61] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:03.279210  890986 system_pods.go:61] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:03.279218  890986 system_pods.go:61] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:14:03.279231  890986 system_pods.go:61] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:14:03.279238  890986 system_pods.go:61] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:14:03.279248  890986 system_pods.go:61] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:14:03.279271  890986 system_pods.go:61] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:14:03.279283  890986 system_pods.go:61] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:14:03.279290  890986 system_pods.go:74] duration metric: took 4.370502ms to wait for pod list to return data ...
	I1026 15:14:03.279304  890986 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:03.286352  890986 default_sa.go:45] found service account: "default"
	I1026 15:14:03.286389  890986 default_sa.go:55] duration metric: took 7.077117ms for default service account to be created ...
	I1026 15:14:03.286400  890986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:03.291601  890986 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:03.291635  890986 system_pods.go:89] "coredns-5dd5756b68-fdtlk" [d765ae9d-1a98-44a0-adef-fdca5334d7de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:03.291647  890986 system_pods.go:89] "etcd-old-k8s-version-304880" [05802004-4ef9-40eb-a7f5-2c69cabd1ff6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:03.291653  890986 system_pods.go:89] "kindnet-kwb2h" [0973c672-640b-42ab-842c-61cacaf8d96e] Running
	I1026 15:14:03.291694  890986 system_pods.go:89] "kube-apiserver-old-k8s-version-304880" [2c34a7c9-29b0-464f-989f-3a1a3260a085] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:14:03.291702  890986 system_pods.go:89] "kube-controller-manager-old-k8s-version-304880" [92718821-2bc8-4c7a-9223-605bbcec4ab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:14:03.291713  890986 system_pods.go:89] "kube-proxy-rsdnc" [5feeb2b9-1888-4036-9214-e75dc8a9bef9] Running
	I1026 15:14:03.291720  890986 system_pods.go:89] "kube-scheduler-old-k8s-version-304880" [3dc676ff-fa64-45d2-9686-570ac77cfc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:14:03.291728  890986 system_pods.go:89] "storage-provisioner" [01c26bc9-c6c9-4eed-a838-d364398a7062] Running
	I1026 15:14:03.291754  890986 system_pods.go:126] duration metric: took 5.348146ms to wait for k8s-apps to be running ...
	I1026 15:14:03.291770  890986 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:14:03.291853  890986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:03.306822  890986 system_svc.go:56] duration metric: took 15.036484ms WaitForService to wait for kubelet
	I1026 15:14:03.306863  890986 kubeadm.go:586] duration metric: took 7.52272895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:03.306883  890986 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:14:03.310070  890986 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:14:03.310102  890986 node_conditions.go:123] node cpu capacity is 2
	I1026 15:14:03.310114  890986 node_conditions.go:105] duration metric: took 3.225677ms to run NodePressure ...
	I1026 15:14:03.310150  890986 start.go:241] waiting for startup goroutines ...
	I1026 15:14:03.310163  890986 start.go:246] waiting for cluster config update ...
	I1026 15:14:03.310173  890986 start.go:255] writing updated cluster config ...
	I1026 15:14:03.310452  890986 ssh_runner.go:195] Run: rm -f paused
	I1026 15:14:03.315013  890986 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:03.320155  890986 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:14:05.326632  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:07.827200  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:10.325563  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:12.326319  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:14.855676  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:17.326724  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:19.327583  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:21.327856  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:23.328287  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:25.825563  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:27.826545  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:29.826955  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	W1026 15:14:32.326660  890986 pod_ready.go:104] pod "coredns-5dd5756b68-fdtlk" is not "Ready", error: <nil>
	I1026 15:14:33.327137  890986 pod_ready.go:94] pod "coredns-5dd5756b68-fdtlk" is "Ready"
	I1026 15:14:33.327168  890986 pod_ready.go:86] duration metric: took 30.006983516s for pod "coredns-5dd5756b68-fdtlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.330177  890986 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.335496  890986 pod_ready.go:94] pod "etcd-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.335524  890986 pod_ready.go:86] duration metric: took 5.316137ms for pod "etcd-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.339184  890986 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.344631  890986 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.344661  890986 pod_ready.go:86] duration metric: took 5.451277ms for pod "kube-apiserver-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.347910  890986 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.524891  890986 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-304880" is "Ready"
	I1026 15:14:33.524921  890986 pod_ready.go:86] duration metric: took 176.982012ms for pod "kube-controller-manager-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:33.724953  890986 pod_ready.go:83] waiting for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.124955  890986 pod_ready.go:94] pod "kube-proxy-rsdnc" is "Ready"
	I1026 15:14:34.124984  890986 pod_ready.go:86] duration metric: took 400.003077ms for pod "kube-proxy-rsdnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.324650  890986 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.724771  890986 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-304880" is "Ready"
	I1026 15:14:34.724804  890986 pod_ready.go:86] duration metric: took 400.127468ms for pod "kube-scheduler-old-k8s-version-304880" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.724819  890986 pod_ready.go:40] duration metric: took 31.409741933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
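
The pod_ready loop above polls each kube-system pod until its Ready condition turns True (coredns took just over 30s here). A minimal sketch of the same wait using client-go, an assumption for illustration; the pod name, namespace, and 4m budget are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-5dd5756b68-fdtlk", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // poll, as the W-lines above show repeated checks
	}
	panic("timed out waiting for pod to be Ready")
}
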
	I1026 15:14:34.789450  890986 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1026 15:14:34.792687  890986 out.go:203] 
	W1026 15:14:34.795785  890986 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:14:34.798717  890986 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:14:34.801795  890986 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-304880" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.530677844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.538418862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.53896711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.557165664Z" level=info msg="Created container cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper" id=ab27a799-7658-40a1-bff7-74c1772457df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.558237167Z" level=info msg="Starting container: cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7" id=99a5a3fd-a5f4-4058-a4fc-e6663105826c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:34 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:34.560495802Z" level=info msg="Started container" PID=1632 containerID=cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper id=99a5a3fd-a5f4-4058-a4fc-e6663105826c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a
	Oct 26 15:14:34 old-k8s-version-304880 conmon[1630]: conmon cd6350fc96d6707c4f20 <ninfo>: container 1632 exited with status 1
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.243000608Z" level=info msg="Removing container: f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.260336325Z" level=info msg="Error loading conmon cgroup of container f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e: cgroup deleted" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:35 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:35.264032884Z" level=info msg="Removed container f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq/dashboard-metrics-scraper" id=f5a99802-ffc2-427f-836b-1c839509aac0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.944664767Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.952878939Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.95291491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.952938853Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.956346719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.956378161Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.95640043Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.960229511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.960267879Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.96029178Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.963970525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.964015088Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.964038801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.967155061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:41 old-k8s-version-304880 crio[648]: time="2025-10-26T15:14:41.967323875Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	cd6350fc96d67       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   d243eadf443ed       dashboard-metrics-scraper-5f989dc9cf-g4bbq       kubernetes-dashboard
	21b2e5379ae9d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   9782359542c60       storage-provisioner                              kube-system
	0262c0af4a845       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago      Running             kubernetes-dashboard        0                   2dba0ffac99a5       kubernetes-dashboard-8694d4445c-t54nl            kubernetes-dashboard
	9f4870ebe7fda       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   f93fc5d9a248e       coredns-5dd5756b68-fdtlk                         kube-system
	59cfc8e2ced06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   0d10b794441ec       busybox                                          default
	d5f4f97f50786       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   dea6d0b521dcd       kindnet-kwb2h                                    kube-system
	712f20d7bb2d3       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   8cd1d8a1ab4fb       kube-proxy-rsdnc                                 kube-system
	484bea0c25b53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   9782359542c60       storage-provisioner                              kube-system
	7fb91d6b4b519       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   98c7b68c41ea7       kube-apiserver-old-k8s-version-304880            kube-system
	8a82a194df0d6       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   ed700be9fdbab       etcd-old-k8s-version-304880                      kube-system
	940c72d34c2c1       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   43ce8ed7b69d8       kube-scheduler-old-k8s-version-304880            kube-system
	bc5d06093202e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   c9d4c18719f24       kube-controller-manager-old-k8s-version-304880   kube-system
	
	
	==> coredns [9f4870ebe7fda1cfed09a3942ae73022c0b81fb1a481240641c3e32e44de7666] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32939 - 47043 "HINFO IN 8841256784006009513.7448725774595104335. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015605622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-304880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-304880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-304880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-304880
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:12:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:31 +0000   Sun, 26 Oct 2025 15:13:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-304880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d0d7db31-34b9-4b69-bff7-8420a1723dd8
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-fdtlk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-304880                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-kwb2h                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-304880             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-304880    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-rsdnc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-304880             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-g4bbq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-t54nl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-304880 event: Registered Node old-k8s-version-304880 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-304880 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-304880 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-304880 event: Registered Node old-k8s-version-304880 in Controller
	
	
	==> dmesg <==
	[Oct26 14:47] overlayfs: idmapped layers are currently not supported
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8a82a194df0d638b9f23111e164c8efa1a7d89f05553222a8420fa495bea507d] <==
	{"level":"info","ts":"2025-10-26T15:13:56.249451Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:13:56.249479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:13:56.249556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:13:56.249563Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-26T15:13:56.249981Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.250011Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.250025Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:13:56.252077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-26T15:13:56.252158Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-26T15:13:56.27301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:13:56.273086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:13:57.124732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-26T15:13:57.124929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.124959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.124998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.125031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-26T15:13:57.136955Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-304880 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:13:57.137227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:13:57.138227Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-26T15:13:57.138639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:13:57.139574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T15:13:57.176752Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:13:57.176861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 15:14:52 up  4:57,  0 user,  load average: 2.10, 3.34, 2.89
	Linux old-k8s-version-304880 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5f4f97f50786460aae350051a6ee4871267ad06cf23f9c831680891272c419d] <==
	I1026 15:14:01.737493       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:14:01.737762       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:14:01.737909       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:14:01.737927       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:14:01.737939       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:14:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:14:01.941101       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:14:01.944609       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:14:01.944844       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:14:01.945036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:14:31.942135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:14:31.951839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:14:31.951991       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:14:31.952920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:14:33.545953       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:33.545989       1 metrics.go:72] Registering metrics
	I1026 15:14:33.546054       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:41.944310       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:41.944364       1 main.go:301] handling current node
	I1026 15:14:51.944774       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:51.944813       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7fb91d6b4b51979cd44655e86f8ac1481868a681f2c89b3097d7dcef9e924cbf] <==
	I1026 15:14:00.960456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:14:01.002047       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:14:01.061742       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 15:14:01.061932       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:14:01.061977       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 15:14:01.062924       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 15:14:01.069085       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:14:01.069210       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:14:01.069412       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 15:14:01.070412       1 aggregator.go:166] initial CRD sync complete...
	I1026 15:14:01.070473       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 15:14:01.070501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:14:01.070529       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:14:01.201470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:14:01.685247       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:14:03.075378       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:14:03.125738       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:14:03.153649       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:14:03.167243       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:14:03.177103       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:14:03.234897       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.154.238"}
	I1026 15:14:03.253367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.165"}
	I1026 15:14:13.945455       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:14:14.043463       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:14:14.145792       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bc5d06093202e06c79f31729f6e2f66dda9f8e41671d0c128c0a94a561e476be] <==
	I1026 15:14:14.154666       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1026 15:14:14.203673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="429.856651ms"
	I1026 15:14:14.203869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.862µs"
	I1026 15:14:14.206534       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-t54nl"
	I1026 15:14:14.211556       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	I1026 15:14:14.221753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.34746ms"
	I1026 15:14:14.237029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.932918ms"
	I1026 15:14:14.237302       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:14:14.259379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.300015ms"
	I1026 15:14:14.260127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.83µs"
	I1026 15:14:14.265377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.568938ms"
	I1026 15:14:14.274747       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:14:14.274779       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:14:14.280111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.991µs"
	I1026 15:14:14.295858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.421681ms"
	I1026 15:14:14.295955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.553µs"
	I1026 15:14:20.226547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.427643ms"
	I1026 15:14:20.226640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.319µs"
	I1026 15:14:24.225732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.129µs"
	I1026 15:14:25.223931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="130.504µs"
	I1026 15:14:26.225719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.867µs"
	I1026 15:14:33.221368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.341351ms"
	I1026 15:14:33.221748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.5µs"
	I1026 15:14:35.276025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.533µs"
	I1026 15:14:44.542006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.02µs"
	
	
	==> kube-proxy [712f20d7bb2d38f8edba961e0c44bda92a7a3f6c0da47f9d03c382368a373990] <==
	I1026 15:14:01.966706       1 server_others.go:69] "Using iptables proxy"
	I1026 15:14:02.004951       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1026 15:14:02.158803       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:14:02.172915       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:14:02.172959       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:14:02.172968       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:14:02.173006       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:14:02.173268       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:14:02.173285       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:02.178949       1 config.go:188] "Starting service config controller"
	I1026 15:14:02.178975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:14:02.178994       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:14:02.178998       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:14:02.179442       1 config.go:315] "Starting node config controller"
	I1026 15:14:02.179449       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:14:02.280178       1 shared_informer.go:318] Caches are synced for node config
	I1026 15:14:02.281716       1 shared_informer.go:318] Caches are synced for service config
	I1026 15:14:02.281780       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [940c72d34c2c196e0a7e52a95d277e21da8b2e50a64301dc1c33710098582c12] <==
	I1026 15:13:59.420849       1 serving.go:348] Generated self-signed cert in-memory
	I1026 15:14:01.752502       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 15:14:01.752554       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:01.771933       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 15:14:01.772673       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 15:14:01.772609       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:01.774800       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:14:01.772643       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:14:01.775265       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 15:14:01.780767       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 15:14:01.780809       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 15:14:01.873193       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 15:14:01.875244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:14:01.877043       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.226025     770 topology_manager.go:215] "Topology Admit Handler" podUID="83f76863-0199-4177-997c-97bcca0ded43" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336266     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/83f76863-0199-4177-997c-97bcca0ded43-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-g4bbq\" (UID: \"83f76863-0199-4177-997c-97bcca0ded43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336551     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/835824df-847d-402e-b2b4-fa53792bffa6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-t54nl\" (UID: \"835824df-847d-402e-b2b4-fa53792bffa6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336609     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4spw\" (UniqueName: \"kubernetes.io/projected/83f76863-0199-4177-997c-97bcca0ded43-kube-api-access-c4spw\") pod \"dashboard-metrics-scraper-5f989dc9cf-g4bbq\" (UID: \"83f76863-0199-4177-997c-97bcca0ded43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: I1026 15:14:14.336640     770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jlz\" (UniqueName: \"kubernetes.io/projected/835824df-847d-402e-b2b4-fa53792bffa6-kube-api-access-79jlz\") pod \"kubernetes-dashboard-8694d4445c-t54nl\" (UID: \"835824df-847d-402e-b2b4-fa53792bffa6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl"
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: W1026 15:14:14.552941     770 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a WatchSource:0}: Error finding container 2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a: Status 404 returned error can't find the container with id 2dba0ffac99a53d6145e17d229a29a453cbdddd0a229581826846ffb31c6f17a
	Oct 26 15:14:14 old-k8s-version-304880 kubelet[770]: W1026 15:14:14.569456     770 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/47abca8f012a00868730309448f813a8d3923fe64a6d547150f7eca61ac50f8e/crio-d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a WatchSource:0}: Error finding container d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a: Status 404 returned error can't find the container with id d243eadf443ede7419a2e541bc13f960ff184839456cb802ce2a192316760c7a
	Oct 26 15:14:24 old-k8s-version-304880 kubelet[770]: I1026 15:14:24.202921     770 scope.go:117] "RemoveContainer" containerID="e8f7f0d668c5311fc7fd8b2e37a8397665bbd077a10d829c5328d9c9dfd54975"
	Oct 26 15:14:24 old-k8s-version-304880 kubelet[770]: I1026 15:14:24.222805     770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t54nl" podStartSLOduration=5.307008215 podCreationTimestamp="2025-10-26 15:14:14 +0000 UTC" firstStartedPulling="2025-10-26 15:14:14.558632834 +0000 UTC m=+19.719285660" lastFinishedPulling="2025-10-26 15:14:19.474365877 +0000 UTC m=+24.635018802" observedRunningTime="2025-10-26 15:14:20.211000887 +0000 UTC m=+25.371653713" watchObservedRunningTime="2025-10-26 15:14:24.222741357 +0000 UTC m=+29.383394175"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: I1026 15:14:25.206420     770 scope.go:117] "RemoveContainer" containerID="e8f7f0d668c5311fc7fd8b2e37a8397665bbd077a10d829c5328d9c9dfd54975"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: I1026 15:14:25.206730     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:25 old-k8s-version-304880 kubelet[770]: E1026 15:14:25.207005     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:26 old-k8s-version-304880 kubelet[770]: I1026 15:14:26.210738     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:26 old-k8s-version-304880 kubelet[770]: E1026 15:14:26.211028     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:32 old-k8s-version-304880 kubelet[770]: I1026 15:14:32.226711     770 scope.go:117] "RemoveContainer" containerID="484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047"
	Oct 26 15:14:34 old-k8s-version-304880 kubelet[770]: I1026 15:14:34.527944     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: I1026 15:14:35.240508     770 scope.go:117] "RemoveContainer" containerID="f103ce394057860d49a084fbe166b9d1e64bdf1cb68c37ae8d39996887a5a06e"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: I1026 15:14:35.241140     770 scope.go:117] "RemoveContainer" containerID="cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	Oct 26 15:14:35 old-k8s-version-304880 kubelet[770]: E1026 15:14:35.241423     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:44 old-k8s-version-304880 kubelet[770]: I1026 15:14:44.528355     770 scope.go:117] "RemoveContainer" containerID="cd6350fc96d6707c4f20003c08cc9d90fed9eb4a1e1c42e3eec30e22abc7edc7"
	Oct 26 15:14:44 old-k8s-version-304880 kubelet[770]: E1026 15:14:44.529174     770 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g4bbq_kubernetes-dashboard(83f76863-0199-4177-997c-97bcca0ded43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g4bbq" podUID="83f76863-0199-4177-997c-97bcca0ded43"
	Oct 26 15:14:47 old-k8s-version-304880 kubelet[770]: I1026 15:14:47.101772     770 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:14:47 old-k8s-version-304880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0262c0af4a8456676c4e3a7de2c2ae2379faa24ef1df396371303d7adacd1785] <==
	2025/10/26 15:14:19 Using namespace: kubernetes-dashboard
	2025/10/26 15:14:19 Using in-cluster config to connect to apiserver
	2025/10/26 15:14:19 Using secret token for csrf signing
	2025/10/26 15:14:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:14:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:14:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 15:14:19 Generating JWE encryption key
	2025/10/26 15:14:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:14:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:14:20 Initializing JWE encryption key from synchronized object
	2025/10/26 15:14:20 Creating in-cluster Sidecar client
	2025/10/26 15:14:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:20 Serving insecurely on HTTP port: 9090
	2025/10/26 15:14:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:19 Starting overwatch
	
	
	==> storage-provisioner [21b2e5379ae9d30caf86aad0ff02e62fe2339f039ce9266f17232ea235ddec07] <==
	I1026 15:14:32.290443       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:14:32.305975       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:14:32.306020       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:14:49.707183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:14:49.707350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa!
	I1026 15:14:49.708026       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a57b129-0642-4616-9dc6-f67d3e08867c", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa became leader
	I1026 15:14:49.807776       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-304880_b7e4c3de-b699-43cb-a187-4da698cde2fa!
	
	
	==> storage-provisioner [484bea0c25b53f5bb644b6ed51950eb140780d0cd48c0cf3bf6f7799dbb08047] <==
	I1026 15:14:01.902484       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:31.904960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-304880 -n old-k8s-version-304880
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-304880 -n old-k8s-version-304880: exit status 2 (448.905861ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-304880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.67s)
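The post-mortem above is what the test helpers collect automatically; a minimal manual equivalent for this profile is sketched below, assuming the docker driver (so the node is a container named after the profile) and that crictl inside the node is configured for the crio socket, as minikube's own tooling requires:

	# Pod phases across all namespaces (same query helpers_test.go runs above):
	kubectl --context old-k8s-version-304880 get pods -A
	# Container status table straight from the CRI inside the node container:
	docker exec old-k8s-version-304880 crictl ps -a
	# Full log bundle, as the failure advice elsewhere in this report suggests:
	minikube logs -p old-k8s-version-304880 --file=logs.txt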

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.47218ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:16:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-018497 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-018497 describe deploy/metrics-server -n kube-system: exit status 1 (81.308733ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-018497 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
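The addon enable itself is not what fails here: minikube exits earlier because its paused-state probe, quoted in the MK_ADDON_ENABLE_PAUSED error above, returns non-zero. A minimal sketch for reproducing that probe by hand, assuming the docker driver (node container named after the profile) and sudo present in the node image:

	# The probe minikube runs (command quoted from the error above); on this
	# run it fails with "open /run/runc: no such file or directory":
	docker exec embed-certs-018497 sudo runc list -f json
	# Listing containers through the CRI instead answers the same
	# "anything paused?" question without depending on runc's state directory:
	docker exec embed-certs-018497 sudo crictl ps -a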
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-018497
helpers_test.go:243: (dbg) docker inspect embed-certs-018497:

-- stdout --
	[
	    {
	        "Id": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	        "Created": "2025-10-26T15:15:02.876896856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 895490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:15:02.967969206Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hosts",
	        "LogPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad-json.log",
	        "Name": "/embed-certs-018497",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-018497:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-018497",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	                "LowerDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-018497",
	                "Source": "/var/lib/docker/volumes/embed-certs-018497/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-018497",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-018497",
	                "name.minikube.sigs.k8s.io": "embed-certs-018497",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f507d741e19c87c5e23e21551a26fc392707874c0a4fb09e87c6ef5ac8ca35c2",
	            "SandboxKey": "/var/run/docker/netns/f507d741e19c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-018497": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:6b:0e:a3:fb:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d6626fff9fc6f2eadb00ab3ddc73eb8fae0b42c47b2901a5327d56ab6e3bb96",
	                    "EndpointID": "b1d4b7792f6a4fc5dabda3cdedee52059604a8798ebb06a85afa53111e67c96b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-018497",
	                        "bf916fec8d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
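Note: in the PortBindings section of the inspect dump above, every binding requests an ephemeral host port ("HostPort": ""), and NetworkSettings.Ports shows what Docker actually assigned (33827 for 22/tcp, 33830 for 8443/tcp, and so on). The lookup the test helpers perform can be reproduced by hand; a minimal sketch, assuming only the container name from the dump:

	# Print the host port Docker assigned to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-018497
	# For the container above this prints 33827.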
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25: (1.18134913s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-337407 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ ssh     │ -p cilium-337407 sudo crio config                                                                                                                                                                                                             │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p cilium-337407                                                                                                                                                                                                                              │ cilium-337407            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ pause   │ -p pause-013921 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ delete  │ -p pause-013921                                                                                                                                                                                                                               │ pause-013921             │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-963871   │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063 │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492      │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-963871   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880   │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-018497       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
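	Note: the last Audit row above is the step this post-mortem covers; addons enable metrics-server on embed-certs-018497 has a START TIME but no END TIME. It can be replayed in isolation with the exact arguments recorded in the table:
	
	out/minikube-linux-arm64 -p embed-certs-018497 addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain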
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:14:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:14:56.638858  894796 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:56.639001  894796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:56.639014  894796 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:56.639045  894796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:56.639311  894796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:14:56.639757  894796 out.go:368] Setting JSON to false
	I1026 15:14:56.640959  894796 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17849,"bootTime":1761473848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:14:56.648257  894796 start.go:141] virtualization:  
	I1026 15:14:56.652103  894796 out.go:179] * [embed-certs-018497] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:14:56.656339  894796 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:14:56.656407  894796 notify.go:220] Checking for updates...
	I1026 15:14:56.662610  894796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:14:56.665608  894796 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:14:56.668643  894796 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:14:56.671635  894796 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:14:56.674554  894796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:14:56.678066  894796 config.go:182] Loaded profile config "cert-expiration-963871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:56.678178  894796 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:14:56.713539  894796 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:14:56.713753  894796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:56.774091  894796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:14:56.764224698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:14:56.774206  894796 docker.go:318] overlay module found
	I1026 15:14:56.777444  894796 out.go:179] * Using the docker driver based on user configuration
	I1026 15:14:56.780413  894796 start.go:305] selected driver: docker
	I1026 15:14:56.780434  894796 start.go:925] validating driver "docker" against <nil>
	I1026 15:14:56.780452  894796 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:14:56.781362  894796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:56.847415  894796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:14:56.838517669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:14:56.847577  894796 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:14:56.847803  894796 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:56.850688  894796 out.go:179] * Using Docker driver with root privileges
	I1026 15:14:56.853788  894796 cni.go:84] Creating CNI manager for ""
	I1026 15:14:56.853861  894796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:14:56.853874  894796 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:14:56.853963  894796 start.go:349] cluster config:
	{Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:56.857101  894796 out.go:179] * Starting "embed-certs-018497" primary control-plane node in "embed-certs-018497" cluster
	I1026 15:14:56.859913  894796 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:14:56.862786  894796 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:14:56.865686  894796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:56.865740  894796 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:14:56.865754  894796 cache.go:58] Caching tarball of preloaded images
	I1026 15:14:56.865837  894796 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:14:56.865852  894796 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:14:56.865965  894796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/config.json ...
	I1026 15:14:56.865992  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/config.json: {Name:mkdcfcf9d559761d4a5bb3412cae18f62faa2798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:56.866156  894796 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:14:56.885349  894796 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:14:56.885371  894796 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:14:56.885388  894796 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:14:56.885414  894796 start.go:360] acquireMachinesLock for embed-certs-018497: {Name:mk0d254539122323ac765a00d762d1b718b9b0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:14:56.885517  894796 start.go:364] duration metric: took 83.119µs to acquireMachinesLock for "embed-certs-018497"
	I1026 15:14:56.885547  894796 start.go:93] Provisioning new machine with config: &{Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:56.885620  894796 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:14:53.307408  894165 out.go:252] * Updating the running docker "cert-expiration-963871" container ...
	I1026 15:14:53.307433  894165 machine.go:93] provisionDockerMachine start ...
	I1026 15:14:53.307511  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:14:53.334596  894165 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.334909  894165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1026 15:14:53.334921  894165 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:14:53.509854  894165 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-963871
	
	I1026 15:14:53.509876  894165 ubuntu.go:182] provisioning hostname "cert-expiration-963871"
	I1026 15:14:53.509939  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:14:53.527022  894165 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.527324  894165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1026 15:14:53.527333  894165 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-963871 && echo "cert-expiration-963871" | sudo tee /etc/hostname
	I1026 15:14:53.709417  894165 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-963871
	
	I1026 15:14:53.709483  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:14:53.732877  894165 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.733178  894165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1026 15:14:53.733195  894165 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-963871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-963871/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-963871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:14:53.918286  894165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:14:53.918313  894165 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:14:53.918345  894165 ubuntu.go:190] setting up certificates
	I1026 15:14:53.918353  894165 provision.go:84] configureAuth start
	I1026 15:14:53.918420  894165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-963871
	I1026 15:14:53.940864  894165 provision.go:143] copyHostCerts
	I1026 15:14:53.940918  894165 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:14:53.940927  894165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:14:53.941000  894165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:14:53.941097  894165 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:14:53.941100  894165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:14:53.941124  894165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:14:53.941177  894165 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:14:53.941180  894165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:14:53.941202  894165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:14:53.941245  894165 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-963871 san=[127.0.0.1 192.168.85.2 cert-expiration-963871 localhost minikube]
	I1026 15:14:55.287386  894165 provision.go:177] copyRemoteCerts
	I1026 15:14:55.287442  894165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:14:55.287485  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:14:55.312773  894165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/cert-expiration-963871/id_rsa Username:docker}
	I1026 15:14:55.442418  894165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:14:55.468245  894165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:14:55.490035  894165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:14:55.520885  894165 provision.go:87] duration metric: took 1.602519341s to configureAuth
	I1026 15:14:55.520902  894165 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:14:55.521086  894165 config.go:182] Loaded profile config "cert-expiration-963871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:55.521194  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:14:55.558287  894165 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:55.558589  894165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1026 15:14:55.558601  894165 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:14:56.889142  894796 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:14:56.889376  894796 start.go:159] libmachine.API.Create for "embed-certs-018497" (driver="docker")
	I1026 15:14:56.889422  894796 client.go:168] LocalClient.Create starting
	I1026 15:14:56.889496  894796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:14:56.889535  894796 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:56.889558  894796 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:56.889617  894796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:14:56.889641  894796 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:56.889655  894796 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:56.890013  894796 cli_runner.go:164] Run: docker network inspect embed-certs-018497 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:14:56.905649  894796 cli_runner.go:211] docker network inspect embed-certs-018497 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:14:56.905738  894796 network_create.go:284] running [docker network inspect embed-certs-018497] to gather additional debugging logs...
	I1026 15:14:56.905761  894796 cli_runner.go:164] Run: docker network inspect embed-certs-018497
	W1026 15:14:56.921148  894796 cli_runner.go:211] docker network inspect embed-certs-018497 returned with exit code 1
	I1026 15:14:56.921195  894796 network_create.go:287] error running [docker network inspect embed-certs-018497]: docker network inspect embed-certs-018497: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-018497 not found
	I1026 15:14:56.921214  894796 network_create.go:289] output of [docker network inspect embed-certs-018497]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-018497 not found
	
	** /stderr **
	I1026 15:14:56.921356  894796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:56.938698  894796 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:14:56.939054  894796 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:14:56.939444  894796 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:14:56.939880  894796 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1d1d0}
	I1026 15:14:56.939903  894796 network_create.go:124] attempt to create docker network embed-certs-018497 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:14:56.939960  894796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-018497 embed-certs-018497
	I1026 15:14:56.997019  894796 network_create.go:108] docker network embed-certs-018497 192.168.76.0/24 created
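	Note: the scan above walks minikube's private /24 candidates in order; 192.168.49.0, 192.168.58.0 and 192.168.67.0 are already taken, so 192.168.76.0/24 is used. One way to confirm the created network, assuming a standard docker CLI (this command is not part of the captured run):
	docker network inspect embed-certs-018497 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	# expected: 192.168.76.0/24 gw 192.168.76.1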
	I1026 15:14:56.997049  894796 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-018497" container
	I1026 15:14:56.997124  894796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:14:57.014739  894796 cli_runner.go:164] Run: docker volume create embed-certs-018497 --label name.minikube.sigs.k8s.io=embed-certs-018497 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:14:57.032150  894796 oci.go:103] Successfully created a docker volume embed-certs-018497
	I1026 15:14:57.032255  894796 cli_runner.go:164] Run: docker run --rm --name embed-certs-018497-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-018497 --entrypoint /usr/bin/test -v embed-certs-018497:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:14:57.584687  894796 oci.go:107] Successfully prepared a docker volume embed-certs-018497
	I1026 15:14:57.584763  894796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:57.584782  894796 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:14:57.584858  894796 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-018497:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 15:15:01.045039  894165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:15:01.045052  894165 machine.go:96] duration metric: took 7.737611647s to provisionDockerMachine
	I1026 15:15:01.045062  894165 start.go:293] postStartSetup for "cert-expiration-963871" (driver="docker")
	I1026 15:15:01.045072  894165 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:15:01.045151  894165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:15:01.045201  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:15:01.068777  894165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/cert-expiration-963871/id_rsa Username:docker}
	I1026 15:15:01.182018  894165 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:15:01.186023  894165 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:15:01.186043  894165 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:15:01.186054  894165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:15:01.186110  894165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:15:01.186196  894165 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:15:01.186303  894165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:15:01.197222  894165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:15:01.222029  894165 start.go:296] duration metric: took 176.951715ms for postStartSetup
	I1026 15:15:01.222126  894165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:15:01.222169  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:15:01.246180  894165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/cert-expiration-963871/id_rsa Username:docker}
	I1026 15:15:01.350560  894165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:15:01.357631  894165 fix.go:56] duration metric: took 8.095850404s for fixHost
	I1026 15:15:01.357648  894165 start.go:83] releasing machines lock for "cert-expiration-963871", held for 8.095888304s
	I1026 15:15:01.357723  894165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-963871
	I1026 15:15:01.376103  894165 ssh_runner.go:195] Run: cat /version.json
	I1026 15:15:01.376156  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:15:01.376200  894165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:15:01.376251  894165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-963871
	I1026 15:15:01.404918  894165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/cert-expiration-963871/id_rsa Username:docker}
	I1026 15:15:01.419750  894165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/cert-expiration-963871/id_rsa Username:docker}
	I1026 15:15:01.520662  894165 ssh_runner.go:195] Run: systemctl --version
	I1026 15:15:01.703671  894165 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:15:01.780341  894165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:15:01.791130  894165 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:15:01.791194  894165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:15:01.806316  894165 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:15:01.806330  894165 start.go:495] detecting cgroup driver to use...
	I1026 15:15:01.806366  894165 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:15:01.806415  894165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:15:01.823429  894165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:15:01.838430  894165 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:15:01.838496  894165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:15:01.856074  894165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:15:01.870958  894165 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:15:02.025224  894165 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:15:02.169553  894165 docker.go:234] disabling docker service ...
	I1026 15:15:02.169610  894165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:15:02.184904  894165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:15:02.198700  894165 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:15:02.384272  894165 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:15:02.590221  894165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:15:02.608026  894165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:15:02.624129  894165 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:15:02.624215  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.644053  894165 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:15:02.644133  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.656431  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.667612  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.678426  894165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:15:02.688467  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.699056  894165 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.708654  894165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:02.720845  894165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:15:02.730497  894165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:15:02.739712  894165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
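	Note: net effect of the sed pipeline above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from disk (a sketch, affected keys only):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]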
	I1026 15:15:02.753241  894796 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-018497:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.168339437s)
	I1026 15:15:02.753287  894796 kic.go:203] duration metric: took 5.168500785s to extract preloaded images to volume ...
	W1026 15:15:02.753427  894796 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 15:15:02.753552  894796 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:15:02.857207  894796 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-018497 --name embed-certs-018497 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-018497 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-018497 --network embed-certs-018497 --ip 192.168.76.2 --volume embed-certs-018497:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:15:03.255492  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Running}}
	I1026 15:15:03.278438  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:03.310404  894796 cli_runner.go:164] Run: docker exec embed-certs-018497 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:15:03.360221  894796 oci.go:144] the created container "embed-certs-018497" has a running status.
	I1026 15:15:03.360260  894796 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa...
	I1026 15:15:03.959154  894796 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:15:03.994429  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:04.014106  894796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:15:04.014130  894796 kic_runner.go:114] Args: [docker exec --privileged embed-certs-018497 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:15:04.081778  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:04.108938  894796 machine.go:93] provisionDockerMachine start ...
	I1026 15:15:04.109050  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:04.137774  894796 main.go:141] libmachine: Using SSH client type: native
	I1026 15:15:04.138132  894796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:15:04.138147  894796 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:15:04.304750  894796 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-018497
	
	I1026 15:15:04.304779  894796 ubuntu.go:182] provisioning hostname "embed-certs-018497"
	I1026 15:15:04.304845  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:04.325782  894796 main.go:141] libmachine: Using SSH client type: native
	I1026 15:15:04.326104  894796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:15:04.326123  894796 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-018497 && echo "embed-certs-018497" | sudo tee /etc/hostname
	I1026 15:15:04.491970  894796 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-018497
	
	I1026 15:15:04.492069  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:04.514206  894796 main.go:141] libmachine: Using SSH client type: native
	I1026 15:15:04.514516  894796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:15:04.514533  894796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-018497' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-018497/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-018497' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:15:04.674750  894796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:15:04.674792  894796 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:15:04.674812  894796 ubuntu.go:190] setting up certificates
	I1026 15:15:04.674825  894796 provision.go:84] configureAuth start
	I1026 15:15:04.674912  894796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-018497
	I1026 15:15:04.705248  894796 provision.go:143] copyHostCerts
	I1026 15:15:04.705309  894796 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:15:04.705318  894796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:15:04.705399  894796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:15:04.705502  894796 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:15:04.705507  894796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:15:04.705534  894796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:15:04.705593  894796 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:15:04.705597  894796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:15:04.705624  894796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:15:04.705672  894796 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.embed-certs-018497 san=[127.0.0.1 192.168.76.2 embed-certs-018497 localhost minikube]
	I1026 15:15:04.771933  894796 provision.go:177] copyRemoteCerts
	I1026 15:15:04.772006  894796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:15:04.772055  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:04.788882  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:04.892349  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:15:04.909430  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:15:04.927178  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:15:04.944874  894796 provision.go:87] duration metric: took 270.035229ms to configureAuth
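
configureAuth above regenerated the docker-machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.76.2, embed-certs-018497, localhost, minikube) and pushed it to /etc/docker on the node. A hedged way to confirm those SANs from the host, using the path from the log:

    # Print the SANs baked into the freshly generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
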
	I1026 15:15:04.944900  894796 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:15:04.945084  894796 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:15:04.945201  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:04.962217  894796 main.go:141] libmachine: Using SSH client type: native
	I1026 15:15:04.962523  894796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:15:04.962541  894796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:15:05.282602  894796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:15:05.282667  894796 machine.go:96] duration metric: took 1.173708097s to provisionDockerMachine
	I1026 15:15:05.282689  894796 client.go:171] duration metric: took 8.393257372s to LocalClient.Create
	I1026 15:15:05.282717  894796 start.go:167] duration metric: took 8.393340695s to libmachine.API.Create "embed-certs-018497"
	I1026 15:15:05.282750  894796 start.go:293] postStartSetup for "embed-certs-018497" (driver="docker")
	I1026 15:15:05.282780  894796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:15:05.282874  894796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:15:05.282957  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:05.300000  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:05.404945  894796 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:15:05.408247  894796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:15:05.408273  894796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:15:05.408284  894796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:15:05.408342  894796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:15:05.408437  894796 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:15:05.408544  894796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:15:05.416111  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:15:05.433669  894796 start.go:296] duration metric: took 150.884766ms for postStartSetup
	I1026 15:15:05.434047  894796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-018497
	I1026 15:15:05.454760  894796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/config.json ...
	I1026 15:15:05.455067  894796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:15:05.455108  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:05.471916  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:05.573917  894796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:15:05.578819  894796 start.go:128] duration metric: took 8.693183346s to createHost
	I1026 15:15:05.578845  894796 start.go:83] releasing machines lock for "embed-certs-018497", held for 8.693314711s
	I1026 15:15:05.578942  894796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-018497
	I1026 15:15:05.595654  894796 ssh_runner.go:195] Run: cat /version.json
	I1026 15:15:05.595679  894796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:15:05.595707  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:05.595737  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:05.618203  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:05.622785  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:05.831600  894796 ssh_runner.go:195] Run: systemctl --version
	I1026 15:15:05.839014  894796 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:15:05.880103  894796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:15:05.884830  894796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:15:05.884906  894796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:15:05.917259  894796 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
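
The find/mv pass above sidelines competing bridge and podman CNI configs by appending a .mk_disabled suffix, so only the CNI minikube installs later (kindnet, chosen below) is active. Inspecting or undoing that is just the rename in reverse (sketch, same naming convention):

    # List configs minikube sidelined; drop the suffix to re-enable one.
    ls /etc/cni/net.d/*.mk_disabled
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist
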
	I1026 15:15:05.917297  894796 start.go:495] detecting cgroup driver to use...
	I1026 15:15:05.917332  894796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:15:05.917405  894796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:15:05.935440  894796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:15:05.948875  894796 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:15:05.948953  894796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:15:05.968126  894796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:15:05.988299  894796 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:15:06.108833  894796 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:15:06.238759  894796 docker.go:234] disabling docker service ...
	I1026 15:15:06.238830  894796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:15:06.260941  894796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:15:06.276125  894796 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:15:06.390123  894796 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:15:06.504594  894796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:15:06.518004  894796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:15:06.532382  894796 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:15:06.532491  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.541705  894796 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:15:06.541874  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.551291  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.560282  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.569816  894796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:15:06.578476  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.588652  894796 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.603895  894796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:15:06.613394  894796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:15:06.621366  894796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:15:06.629003  894796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:15:06.740264  894796 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:15:06.876582  894796 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:15:06.876660  894796 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:15:06.881253  894796 start.go:563] Will wait 60s for crictl version
	I1026 15:15:06.881317  894796 ssh_runner.go:195] Run: which crictl
	I1026 15:15:06.884843  894796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:15:06.910286  894796 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:15:06.910371  894796 ssh_runner.go:195] Run: crio --version
	I1026 15:15:06.940116  894796 ssh_runner.go:195] Run: crio --version
	I1026 15:15:06.975866  894796 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:15:03.040226  894165 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:15:06.978795  894796 cli_runner.go:164] Run: docker network inspect embed-certs-018497 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:15:06.999362  894796 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:15:07.003624  894796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
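
The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current mapping, and cp the temp file back into place. cp rather than mv matters because /etc/hosts is bind-mounted into the container, so the existing inode must be rewritten, not replaced. The same pattern spelled out as a sketch:

    # Idempotently pin a name to an IP in /etc/hosts (pattern from the log).
    NAME=host.minikube.internal IP=192.168.76.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the bind-mounted inode intact
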
	I1026 15:15:07.017264  894796 kubeadm.go:883] updating cluster {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:15:07.017399  894796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:15:07.017471  894796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:15:07.050930  894796 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:15:07.050957  894796 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:15:07.051021  894796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:15:07.079178  894796 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:15:07.079203  894796 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:15:07.079212  894796 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:15:07.079310  894796 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-018497 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:15:07.079400  894796 ssh_runner.go:195] Run: crio config
	I1026 15:15:07.150670  894796 cni.go:84] Creating CNI manager for ""
	I1026 15:15:07.150693  894796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:15:07.150711  894796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:15:07.150744  894796 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-018497 NodeName:embed-certs-018497 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:15:07.150882  894796 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-018497"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
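
The three documents above (InitConfiguration + ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml below. When a run like this fails at init, the generated file can be checked up front with kubeadm's validator (a sketch, assuming the bundled v1.34 binary's `config validate` subcommand):

    # Validate the generated config against the v1beta4/component schemas.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
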
	I1026 15:15:07.150960  894796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:15:07.160381  894796 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:15:07.160462  894796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:15:07.168390  894796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:15:07.181620  894796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:15:07.194815  894796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:15:07.207969  894796 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:15:07.211631  894796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:15:07.221411  894796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:15:07.350885  894796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:15:07.368586  894796 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497 for IP: 192.168.76.2
	I1026 15:15:07.368661  894796 certs.go:195] generating shared ca certs ...
	I1026 15:15:07.368729  894796 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:07.368909  894796 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:15:07.368987  894796 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:15:07.369024  894796 certs.go:257] generating profile certs ...
	I1026 15:15:07.369108  894796 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.key
	I1026 15:15:07.369153  894796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.crt with IP's: []
	I1026 15:15:07.940005  894796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.crt ...
	I1026 15:15:07.940039  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.crt: {Name:mk4540f4e0b0835a792d1bc7a7e7fe1d83c00106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:07.940252  894796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.key ...
	I1026 15:15:07.940277  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.key: {Name:mkf74305e48d9ab4fb66b74cc50e5f8ae6af143e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:07.940370  894796 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c
	I1026 15:15:07.940393  894796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt.ac97108c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:15:08.476974  894796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt.ac97108c ...
	I1026 15:15:08.477011  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt.ac97108c: {Name:mk3bbc1afe75cba3bd35a6c8fd1d7fcc5233cc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:08.477321  894796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c ...
	I1026 15:15:08.477393  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c: {Name:mk00c688c62f0f6f041adf3def0886b3f50b5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:08.477700  894796 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt.ac97108c -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt
	I1026 15:15:08.477847  894796 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key
	I1026 15:15:08.477956  894796 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key
	I1026 15:15:08.477993  894796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt with IP's: []
	I1026 15:15:08.941267  894796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt ...
	I1026 15:15:08.941303  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt: {Name:mk0db344884bc44c34bf0976018c67322c6c84f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:08.941493  894796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key ...
	I1026 15:15:08.941508  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key: {Name:mkca90e07e54b30d40f632b2c692fba176da9317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:08.941686  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:15:08.941734  894796 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:15:08.941743  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:15:08.941779  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:15:08.941808  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:15:08.941835  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:15:08.941881  894796 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:15:08.942459  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:15:08.961327  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:15:08.979502  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:15:09.000415  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:15:09.034223  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:15:09.052827  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:15:09.071674  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:15:09.090050  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:15:09.108592  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:15:09.127257  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:15:09.145697  894796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:15:09.164077  894796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:15:09.177275  894796 ssh_runner.go:195] Run: openssl version
	I1026 15:15:09.185721  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:15:09.195141  894796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:15:09.198954  894796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:15:09.199019  894796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:15:09.242275  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:15:09.250862  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:15:09.260874  894796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:15:09.265486  894796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:15:09.265595  894796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:15:09.323696  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:15:09.333651  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:15:09.342906  894796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:15:09.346818  894796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:15:09.346890  894796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:15:09.388439  894796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
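
The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow OpenSSL's subject-hash convention: TLS clients scan /etc/ssl/certs for <subject_hash>.0 symlinks, so minikube recreates by hand what c_rehash would generate. The hash in each link name is exactly what the preceding `openssl x509 -hash -noout` call printed:

    # OpenSSL resolves CAs in /etc/ssl/certs by subject hash.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941
    ls -l /etc/ssl/certs/b5213941.0   # symlink back to minikubeCA.pem
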
	I1026 15:15:09.397090  894796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:15:09.400518  894796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:15:09.400582  894796 kubeadm.go:400] StartCluster: {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:15:09.400672  894796 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:15:09.400768  894796 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:15:09.426569  894796 cri.go:89] found id: ""
	I1026 15:15:09.426688  894796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:15:09.434498  894796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:15:09.442457  894796 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:15:09.442551  894796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:15:09.450419  894796 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:15:09.450438  894796 kubeadm.go:157] found existing configuration files:
	
	I1026 15:15:09.450492  894796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:15:09.458113  894796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:15:09.458191  894796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:15:09.465741  894796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:15:09.473254  894796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:15:09.473350  894796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:15:09.481583  894796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:15:09.489758  894796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:15:09.489853  894796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:15:09.497640  894796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:15:09.505522  894796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:15:09.505638  894796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:15:09.513878  894796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:15:09.552875  894796 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:15:09.552956  894796 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:15:09.581906  894796 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:15:09.581984  894796 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 15:15:09.582031  894796 kubeadm.go:318] OS: Linux
	I1026 15:15:09.582085  894796 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:15:09.582139  894796 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 15:15:09.582191  894796 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:15:09.582246  894796 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:15:09.582299  894796 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:15:09.582354  894796 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:15:09.582405  894796 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:15:09.582460  894796 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:15:09.582512  894796 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 15:15:09.650496  894796 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:15:09.650631  894796 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:15:09.650779  894796 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:15:09.665087  894796 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:15:09.671234  894796 out.go:252]   - Generating certificates and keys ...
	I1026 15:15:09.671407  894796 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:15:09.671518  894796 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:15:09.859763  894796 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:15:10.234101  894796 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:15:10.391084  894796 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:15:11.067139  894796 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:15:11.460691  894796 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:15:11.460972  894796 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-018497 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:15:11.676658  894796 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:15:11.676988  894796 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-018497 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:15:12.062060  894796 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:15:12.629608  894796 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:15:12.837890  894796 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:15:12.838163  894796 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:15:12.946721  894796 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:15:13.203994  894796 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:15:14.206326  894796 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:15:14.435045  894796 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:15:14.604131  894796 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:15:14.604917  894796 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:15:14.607736  894796 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:15:14.611414  894796 out.go:252]   - Booting up control plane ...
	I1026 15:15:14.611517  894796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:15:14.611599  894796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:15:14.611669  894796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:15:14.627138  894796 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:15:14.627258  894796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:15:14.634986  894796 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:15:14.635420  894796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:15:14.635476  894796 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:15:14.766033  894796 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:15:14.769179  894796 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:15:16.768635  894796 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001940035s
	I1026 15:15:16.772290  894796 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:15:16.772404  894796 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 15:15:16.772499  894796 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:15:16.772599  894796 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:15:20.050890  894796 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.278263093s
	I1026 15:15:21.103284  894796 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.330955267s
	I1026 15:15:22.775379  894796 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002879223s
	I1026 15:15:22.796616  894796 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:15:22.814250  894796 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:15:22.828973  894796 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:15:22.829203  894796 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-018497 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:15:22.842281  894796 kubeadm.go:318] [bootstrap-token] Using token: jnmbfk.wzi5aynfcozj4ic6
	I1026 15:15:22.845240  894796 out.go:252]   - Configuring RBAC rules ...
	I1026 15:15:22.845389  894796 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:15:22.851609  894796 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:15:22.866258  894796 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:15:22.880487  894796 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:15:22.887295  894796 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:15:22.893015  894796 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:15:23.184858  894796 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:15:23.638162  894796 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:15:24.183115  894796 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:15:24.184606  894796 kubeadm.go:318] 
	I1026 15:15:24.184737  894796 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:15:24.184753  894796 kubeadm.go:318] 
	I1026 15:15:24.184836  894796 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:15:24.184844  894796 kubeadm.go:318] 
	I1026 15:15:24.184872  894796 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:15:24.184937  894796 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:15:24.184995  894796 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:15:24.185003  894796 kubeadm.go:318] 
	I1026 15:15:24.185060  894796 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:15:24.185068  894796 kubeadm.go:318] 
	I1026 15:15:24.185124  894796 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:15:24.185133  894796 kubeadm.go:318] 
	I1026 15:15:24.185194  894796 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:15:24.185277  894796 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:15:24.185353  894796 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:15:24.185362  894796 kubeadm.go:318] 
	I1026 15:15:24.185450  894796 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:15:24.185539  894796 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:15:24.185548  894796 kubeadm.go:318] 
	I1026 15:15:24.185648  894796 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jnmbfk.wzi5aynfcozj4ic6 \
	I1026 15:15:24.185765  894796 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:15:24.185791  894796 kubeadm.go:318] 	--control-plane 
	I1026 15:15:24.185799  894796 kubeadm.go:318] 
	I1026 15:15:24.185887  894796 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:15:24.185896  894796 kubeadm.go:318] 
	I1026 15:15:24.185982  894796 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jnmbfk.wzi5aynfcozj4ic6 \
	I1026 15:15:24.186092  894796 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:15:24.190020  894796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:15:24.190258  894796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:15:24.190378  894796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:15:24.190399  894796 cni.go:84] Creating CNI manager for ""
	I1026 15:15:24.190409  894796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:15:24.195545  894796 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:15:24.198588  894796 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:15:24.202921  894796 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:15:24.202942  894796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:15:24.217940  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:15:24.518767  894796 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:15:24.518870  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:24.518900  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-018497 minikube.k8s.io/updated_at=2025_10_26T15_15_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=embed-certs-018497 minikube.k8s.io/primary=true
	I1026 15:15:24.535123  894796 ops.go:34] apiserver oom_adj: -16
	I1026 15:15:24.653923  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:25.154788  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:25.654197  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:26.154008  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:26.654596  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:27.154372  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:27.654059  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:28.154032  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:28.654359  894796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:15:28.756806  894796 kubeadm.go:1113] duration metric: took 4.237995737s to wait for elevateKubeSystemPrivileges
	I1026 15:15:28.756840  894796 kubeadm.go:402] duration metric: took 19.356263283s to StartCluster
	I1026 15:15:28.756859  894796 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:28.756925  894796 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:15:28.758324  894796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:15:28.758558  894796 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:15:28.758649  894796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:15:28.758884  894796 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:15:28.758921  894796 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:15:28.758986  894796 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-018497"
	I1026 15:15:28.759001  894796 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-018497"
	I1026 15:15:28.759026  894796 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:15:28.759514  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:28.759798  894796 addons.go:69] Setting default-storageclass=true in profile "embed-certs-018497"
	I1026 15:15:28.759818  894796 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-018497"
	I1026 15:15:28.760074  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:28.762037  894796 out.go:179] * Verifying Kubernetes components...
	I1026 15:15:28.765078  894796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:15:28.787545  894796 addons.go:238] Setting addon default-storageclass=true in "embed-certs-018497"
	I1026 15:15:28.787596  894796 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:15:28.788032  894796 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:15:28.798834  894796 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:15:28.801683  894796 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:15:28.801706  894796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:15:28.801773  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:28.829170  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:28.839189  894796 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:15:28.839213  894796 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:15:28.839301  894796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:15:28.863634  894796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:15:29.091175  894796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:15:29.091310  894796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:15:29.092230  894796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:15:29.167799  894796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:15:29.451480  894796 node_ready.go:35] waiting up to 6m0s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:15:29.451862  894796 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 15:15:29.761483  894796 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:15:29.764499  894796 addons.go:514] duration metric: took 1.005546166s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 15:15:29.956157  894796 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-018497" context rescaled to 1 replicas
	W1026 15:15:31.454616  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:33.454901  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:35.955054  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:38.455224  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:40.954473  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:42.954514  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:44.955217  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:46.955276  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:49.454970  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:51.954852  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:54.455228  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:56.954395  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:15:58.954486  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:16:00.954729  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:16:02.955156  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:16:05.455642  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:16:07.954477  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	W1026 15:16:09.954828  894796 node_ready.go:57] node "embed-certs-018497" has "Ready":"False" status (will retry)
	I1026 15:16:11.455822  894796 node_ready.go:49] node "embed-certs-018497" is "Ready"
	I1026 15:16:11.455849  894796 node_ready.go:38] duration metric: took 42.004291217s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:16:11.455863  894796 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:16:11.455924  894796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:16:11.468764  894796 api_server.go:72] duration metric: took 42.710168219s to wait for apiserver process to appear ...
	I1026 15:16:11.468786  894796 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:16:11.468807  894796 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:16:11.478420  894796 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:16:11.479487  894796 api_server.go:141] control plane version: v1.34.1
	I1026 15:16:11.479510  894796 api_server.go:131] duration metric: took 10.716377ms to wait for apiserver health ...
	I1026 15:16:11.479520  894796 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:16:11.486318  894796 system_pods.go:59] 8 kube-system pods found
	I1026 15:16:11.486350  894796 system_pods.go:61] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:11.486357  894796 system_pods.go:61] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running
	I1026 15:16:11.486363  894796 system_pods.go:61] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:11.486368  894796 system_pods.go:61] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running
	I1026 15:16:11.486373  894796 system_pods.go:61] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running
	I1026 15:16:11.486377  894796 system_pods.go:61] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:11.486382  894796 system_pods.go:61] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running
	I1026 15:16:11.486389  894796 system_pods.go:61] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:16:11.486395  894796 system_pods.go:74] duration metric: took 6.869787ms to wait for pod list to return data ...
	I1026 15:16:11.486403  894796 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:16:11.493961  894796 default_sa.go:45] found service account: "default"
	I1026 15:16:11.493984  894796 default_sa.go:55] duration metric: took 7.575526ms for default service account to be created ...
	I1026 15:16:11.493995  894796 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:16:11.500654  894796 system_pods.go:86] 8 kube-system pods found
	I1026 15:16:11.500687  894796 system_pods.go:89] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:11.500721  894796 system_pods.go:89] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running
	I1026 15:16:11.500729  894796 system_pods.go:89] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:11.500733  894796 system_pods.go:89] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running
	I1026 15:16:11.500738  894796 system_pods.go:89] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running
	I1026 15:16:11.500742  894796 system_pods.go:89] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:11.500746  894796 system_pods.go:89] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running
	I1026 15:16:11.500751  894796 system_pods.go:89] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:16:11.500774  894796 retry.go:31] will retry after 299.846009ms: missing components: kube-dns
	I1026 15:16:11.814227  894796 system_pods.go:86] 8 kube-system pods found
	I1026 15:16:11.814254  894796 system_pods.go:89] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Running
	I1026 15:16:11.814261  894796 system_pods.go:89] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running
	I1026 15:16:11.814266  894796 system_pods.go:89] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:11.814276  894796 system_pods.go:89] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running
	I1026 15:16:11.814282  894796 system_pods.go:89] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running
	I1026 15:16:11.814286  894796 system_pods.go:89] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:11.814290  894796 system_pods.go:89] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running
	I1026 15:16:11.814293  894796 system_pods.go:89] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Running
	I1026 15:16:11.814300  894796 system_pods.go:126] duration metric: took 320.300292ms to wait for k8s-apps to be running ...
	I1026 15:16:11.814308  894796 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:16:11.814369  894796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:16:11.828196  894796 system_svc.go:56] duration metric: took 13.876921ms WaitForService to wait for kubelet
	I1026 15:16:11.828222  894796 kubeadm.go:586] duration metric: took 43.069632187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:16:11.828242  894796 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:16:11.834564  894796 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:16:11.834658  894796 node_conditions.go:123] node cpu capacity is 2
	I1026 15:16:11.834684  894796 node_conditions.go:105] duration metric: took 6.435977ms to run NodePressure ...
	I1026 15:16:11.834720  894796 start.go:241] waiting for startup goroutines ...
	I1026 15:16:11.834745  894796 start.go:246] waiting for cluster config update ...
	I1026 15:16:11.834775  894796 start.go:255] writing updated cluster config ...
	I1026 15:16:11.835085  894796 ssh_runner.go:195] Run: rm -f paused
	I1026 15:16:11.839262  894796 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:16:11.848778  894796 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.854098  894796 pod_ready.go:94] pod "coredns-66bc5c9577-rkx49" is "Ready"
	I1026 15:16:11.854168  894796 pod_ready.go:86] duration metric: took 5.311477ms for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.859800  894796 pod_ready.go:83] waiting for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.865015  894796 pod_ready.go:94] pod "etcd-embed-certs-018497" is "Ready"
	I1026 15:16:11.865093  894796 pod_ready.go:86] duration metric: took 5.215303ms for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.867380  894796 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.872794  894796 pod_ready.go:94] pod "kube-apiserver-embed-certs-018497" is "Ready"
	I1026 15:16:11.872873  894796 pod_ready.go:86] duration metric: took 5.42022ms for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:11.875296  894796 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:12.243531  894796 pod_ready.go:94] pod "kube-controller-manager-embed-certs-018497" is "Ready"
	I1026 15:16:12.243559  894796 pod_ready.go:86] duration metric: took 368.20717ms for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:12.444220  894796 pod_ready.go:83] waiting for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:12.843258  894796 pod_ready.go:94] pod "kube-proxy-n7rjg" is "Ready"
	I1026 15:16:12.843289  894796 pod_ready.go:86] duration metric: took 399.041572ms for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:13.043816  894796 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:13.443411  894796 pod_ready.go:94] pod "kube-scheduler-embed-certs-018497" is "Ready"
	I1026 15:16:13.443441  894796 pod_ready.go:86] duration metric: took 399.597213ms for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:13.443455  894796 pod_ready.go:40] duration metric: took 1.604160706s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:16:13.496008  894796 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:16:13.499207  894796 out.go:179] * Done! kubectl is now configured to use "embed-certs-018497" cluster and "default" namespace by default
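
Note on the readiness loop above: between 15:15:29 and 15:16:11 the log polls the node roughly every 2-2.5 seconds until it reports "Ready":"True", under a 6m0s budget. A minimal Go sketch of an equivalent poll, shelling out to kubectl (the node name and the 6m/2.5s numbers are taken from this log; everything else is illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s wait in the log
	for time.Now().Before(deadline) {
		// Read only the node's Ready condition.
		out, err := exec.Command("kubectl", "get", "node", "embed-certs-018497",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s retry cadence above
	}
	fmt.Println("timed out waiting for node Ready")
}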
	
	
	==> CRI-O <==
	Oct 26 15:16:11 embed-certs-018497 crio[839]: time="2025-10-26T15:16:11.393354134Z" level=info msg="Created container 46430a249e1bfb6525ac849defa9ecb99a44ac1f3994f7664a5247d3ffa7dc29: kube-system/coredns-66bc5c9577-rkx49/coredns" id=6fbdaca0-1a85-4127-814d-209b7895a16d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:16:11 embed-certs-018497 crio[839]: time="2025-10-26T15:16:11.394628192Z" level=info msg="Starting container: 46430a249e1bfb6525ac849defa9ecb99a44ac1f3994f7664a5247d3ffa7dc29" id=e9267ac9-9301-49a0-bcb9-ed32c1689eab name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:16:11 embed-certs-018497 crio[839]: time="2025-10-26T15:16:11.401405834Z" level=info msg="Started container" PID=1723 containerID=46430a249e1bfb6525ac849defa9ecb99a44ac1f3994f7664a5247d3ffa7dc29 description=kube-system/coredns-66bc5c9577-rkx49/coredns id=e9267ac9-9301-49a0-bcb9-ed32c1689eab name=/runtime.v1.RuntimeService/StartContainer sandboxID=efc56a428a4cf36d4f7f9f2cc235db21866d90780c2d55b55cafb74f2e870e45
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.006940374Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f8880c06-f972-483e-a0ac-a995d0617b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.007188302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.01722729Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:82c75d387ae3ec294c2447736316dd027c93ad3afc1acd0a242ea9bf67aadab2 UID:3e2e9efa-2562-4274-8e98-1f31c6a5039f NetNS:/var/run/netns/5f150323-4f06-4aae-a31f-c003025803a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cc58}] Aliases:map[]}"
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.017271434Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.028996389Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:82c75d387ae3ec294c2447736316dd027c93ad3afc1acd0a242ea9bf67aadab2 UID:3e2e9efa-2562-4274-8e98-1f31c6a5039f NetNS:/var/run/netns/5f150323-4f06-4aae-a31f-c003025803a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cc58}] Aliases:map[]}"
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.029186134Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.031895801Z" level=info msg="Ran pod sandbox 82c75d387ae3ec294c2447736316dd027c93ad3afc1acd0a242ea9bf67aadab2 with infra container: default/busybox/POD" id=f8880c06-f972-483e-a0ac-a995d0617b39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.033723596Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e97e7be7-d020-4088-b503-5a9d5df938b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.033993769Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e97e7be7-d020-4088-b503-5a9d5df938b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.034145336Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e97e7be7-d020-4088-b503-5a9d5df938b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.037660575Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=53d0285b-70c2-4f68-ada7-6068b3f8015e name=/runtime.v1.ImageService/PullImage
	Oct 26 15:16:14 embed-certs-018497 crio[839]: time="2025-10-26T15:16:14.039731941Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.911063814Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=53d0285b-70c2-4f68-ada7-6068b3f8015e name=/runtime.v1.ImageService/PullImage
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.911767198Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd35e708-4ab5-4a47-b880-0fd88097c593 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.914051597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00dd062d-b201-4919-92de-8b76c54cca34 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.919599794Z" level=info msg="Creating container: default/busybox/busybox" id=bb8fb64e-61c6-4798-9ed4-32d8b07e96fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.91972727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.924317828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.924880755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.938738845Z" level=info msg="Created container d4ee9809e4770834d162bddde6b671241212f9b898cc7d4d236b00d9612c2b6e: default/busybox/busybox" id=bb8fb64e-61c6-4798-9ed4-32d8b07e96fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.939836835Z" level=info msg="Starting container: d4ee9809e4770834d162bddde6b671241212f9b898cc7d4d236b00d9612c2b6e" id=486e8331-8ca1-49a7-a2ca-351f06bcb735 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:16:15 embed-certs-018497 crio[839]: time="2025-10-26T15:16:15.941933326Z" level=info msg="Started container" PID=1778 containerID=d4ee9809e4770834d162bddde6b671241212f9b898cc7d4d236b00d9612c2b6e description=default/busybox/busybox id=486e8331-8ca1-49a7-a2ca-351f06bcb735 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82c75d387ae3ec294c2447736316dd027c93ad3afc1acd0a242ea9bf67aadab2
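
The CRI-O lines above trace the standard CRI flow for the busybox pod: ImageStatus finds nothing, PullImage fetches by digest, then CreateContainer and StartContainer run it. A hedged sketch of reproducing the pull and then listing the result with crictl on the node (the use of sudo and running this on the test host at all are assumptions for illustration, not taken from this log):

package main

import (
	"log"
	"os/exec"
)

// run shells out to crictl; the CRI-O socket typically requires root.
func run(args ...string) {
	out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("crictl %v failed: %v\n%s", args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	run("pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc") // mirrors the PullImage RPC above
	run("ps", "-a")                                         // mirrors the "container status" table below
}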
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d4ee9809e4770       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   82c75d387ae3e       busybox                                      default
	46430a249e1bf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   efc56a428a4cf       coredns-66bc5c9577-rkx49                     kube-system
	90a584795b0ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   340635445a6d9       storage-provisioner                          kube-system
	31b81aa8472c1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   6ee4278823689       kube-proxy-n7rjg                             kube-system
	2ba280ec43319       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   d46eee208ea9b       kindnet-gxpz7                                kube-system
	b72f6be68363c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   88f3f94af321e       kube-apiserver-embed-certs-018497            kube-system
	d33f30cab6a07       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   72ed3f88d6215       etcd-embed-certs-018497                      kube-system
	5fb431d595b91       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   b4be6df99890d       kube-scheduler-embed-certs-018497            kube-system
	4ad0c51456ed5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   eb40b75e9dfe4       kube-controller-manager-embed-certs-018497   kube-system
	
	
	==> coredns [46430a249e1bfb6525ac849defa9ecb99a44ac1f3994f7664a5247d3ffa7dc29] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35512 - 23205 "HINFO IN 5706061115593979514.8932891325782615818. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013989612s
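
The HINFO query above is CoreDNS's own startup self-check. To confirm the cluster DNS answers from inside the pod network, a minimal Go probe against the kube-dns ClusterIP can be used (10.96.0.10 is the address allocated in the kube-apiserver log later in this report; running this from anywhere without a route to the service network is an assumption that will not hold):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Force lookups through the cluster DNS service instead of /etc/resolv.conf.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err) // expect 10.96.0.1, the ClusterIP allocated for default/kubernetes
}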
	
	
	==> describe nodes <==
	Name:               embed-certs-018497
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-018497
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-018497
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_15_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-018497
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:16:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:16:25 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:16:25 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:16:25 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:16:25 +0000   Sun, 26 Oct 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-018497
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                072f2fa1-40d7-443d-9b77-e971842fc752
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-rkx49                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-018497                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-gxpz7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-018497             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-018497    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-n7rjg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-018497             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-018497 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-018497 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-018497 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s   node-controller  Node embed-certs-018497 event: Registered Node embed-certs-018497 in Controller
	  Normal   NodeReady                15s   kubelet          Node embed-certs-018497 status is now: NodeReady
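
The percentages in "Allocated resources" above are integer-truncated ratios of summed requests to allocatable capacity. Worked out for CPU from the Non-terminated Pods table (a verification sketch of the arithmetic, not kubectl code):

package main

import "fmt"

func main() {
	// CPU requests from the pod table, in millicores:
	// coredns 100m + etcd 100m + kindnet 100m + apiserver 250m +
	// controller-manager 200m + scheduler 100m = 850m.
	requestsMilli := 100 + 100 + 100 + 250 + 200 + 100
	allocatableMilli := 2 * 1000 // the node's 2 allocatable CPUs
	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli)
	// Prints "cpu 850m (42%)": 850/2000 = 42.5%, truncated to 42% as in the table.
}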
	
	
	==> dmesg <==
	[Oct26 14:52] overlayfs: idmapped layers are currently not supported
	[Oct26 14:53] overlayfs: idmapped layers are currently not supported
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d33f30cab6a07869c2d93ded1c0789a487b4ab6565af713dbed783e288b5e104] <==
	{"level":"warn","ts":"2025-10-26T15:15:19.268870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.292162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.313245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.339115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.371057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.388548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.418487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.474535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.480033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.522696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.554189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.579704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.637574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.657432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.682931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.700146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.725853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.755041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.800770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.824037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.873929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.906419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.928443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:19.954149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:15:20.104820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:16:25 up  4:58,  0 user,  load average: 1.21, 2.81, 2.75
	Linux embed-certs-018497 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ba280ec4331994952a5125646ee32607acd716e81e218ad14433c6ddc581e58] <==
	I1026 15:15:30.227375       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:15:30.227620       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:15:30.227743       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:15:30.227762       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:15:30.227773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:15:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:15:30.429639       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:15:30.429656       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:15:30.429663       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:15:30.430457       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:16:00.430923       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:16:00.431082       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:16:00.431276       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:16:00.431335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:16:01.930707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:16:01.930743       1 metrics.go:72] Registering metrics
	I1026 15:16:01.930815       1 controller.go:711] "Syncing nftables rules"
	I1026 15:16:10.434567       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:16:10.434609       1 main.go:301] handling current node
	I1026 15:16:20.429473       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:16:20.429619       1 main.go:301] handling current node
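
kindnet reaches the apiserver through the service VIP 10.96.0.1:443, which only works once kube-proxy has programmed the service rules. The failed LIST calls above were issued around 15:15:30.43, just before kube-proxy's caches synced (15:15:30.81 in the kube-proxy log below), so they hung for the client's 30s timeout, failed at 15:16:00, and the retry synced cleanly at 15:16:01. A minimal sketch of that dial-until-reachable behavior (InsecureSkipVerify and the fixed interval are illustrative shortcuts, not kindnet's code):

package main

import (
	"crypto/tls"
	"log"
	"time"
)

func main() {
	for {
		// Only checks TCP/TLS reachability of the service VIP, not auth.
		conn, err := tls.Dial("tcp", "10.96.0.1:443", &tls.Config{InsecureSkipVerify: true})
		if err == nil {
			conn.Close()
			log.Println("service VIP reachable")
			return
		}
		log.Printf("not yet reachable, retrying: %v", err)
		time.Sleep(5 * time.Second)
	}
}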
	
	
	==> kube-apiserver [b72f6be68363cb0c5bb8e3040f0ac809f9743c8c73d2586487af5b7d423ec6a9] <==
	I1026 15:15:21.112465       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:15:21.112497       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:15:21.128593       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:15:21.130973       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:15:21.142261       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:15:21.142531       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:15:21.280406       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:15:21.694764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:15:21.703506       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:15:21.703536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:15:22.426156       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:15:22.483957       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:15:22.619141       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:15:22.627381       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 15:15:22.628529       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:15:22.634452       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:15:22.850115       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:15:23.605889       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:15:23.635893       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:15:23.657890       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:15:28.200494       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:15:28.653294       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:15:28.662231       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:15:28.907580       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1026 15:16:23.848361       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:33838: use of closed network connection
	
	
	==> kube-controller-manager [4ad0c51456ed57fb8cbfcbcd9c371c0be510d2626c21646c381ecd2e82cd7add] <==
	I1026 15:15:27.854967       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:15:27.859686       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:15:27.869955       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:15:27.885801       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:15:27.890424       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:15:27.891480       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:15:27.893673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:15:27.893728       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:15:27.894211       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:15:27.894281       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:15:27.894311       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:15:27.894481       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:15:27.894628       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:15:27.894663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:15:27.897171       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:15:27.898397       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:15:27.903770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:15:27.903925       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:15:27.912471       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:15:27.912552       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:15:27.912577       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:15:27.912596       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:15:27.912603       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:15:27.921900       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-018497" podCIDRs=["10.244.0.0/24"]
	I1026 15:16:12.852305       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [31b81aa8472c1e6aa50ee16514ea01dca8e527e42141d891e6b084b9c0f79b51] <==
	I1026 15:15:30.489118       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:15:30.582226       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:15:30.682819       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:15:30.682855       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:15:30.682952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:15:30.707170       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:15:30.707307       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:15:30.711957       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:15:30.712322       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:15:30.712869       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:15:30.714305       1 config.go:200] "Starting service config controller"
	I1026 15:15:30.714326       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:15:30.714354       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:15:30.714358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:15:30.714368       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:15:30.714372       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:15:30.714990       1 config.go:309] "Starting node config controller"
	I1026 15:15:30.714996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:15:30.715009       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:15:30.814448       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:15:30.814448       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:15:30.814487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
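	
	Note: the single E-level line at 15:15:30.682952 is a configuration warning, not a failure: with nodePortAddresses unset, NodePort services accept connections on every local IP. A hedged sketch of the remediation the message itself suggests (the ConfigMap name "kube-proxy" is the kubeadm default and is an assumption here):
	
	    # inspect the current setting (ConfigMap name assumed)
	    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	    # or pass the flag the warning recommends
	    kube-proxy --nodeport-addresses primary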
	
	
	==> kube-scheduler [5fb431d595b9114dbd329241f8b4df7062c65644b1d99f60f90fe3987fbaed8d] <==
	E1026 15:15:21.104467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:15:21.108187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:15:21.111086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 15:15:21.111537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:15:21.111634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:15:21.111705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:15:21.111758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:15:21.111838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:15:21.111871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:15:21.111906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:15:21.111943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:15:21.111970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:15:21.112023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:15:21.112082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:15:21.117038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:15:21.117400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:15:21.117526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:15:21.117583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:15:21.117624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:15:21.971825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:15:22.018546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 15:15:22.091823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:15:22.106793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:15:22.117596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1026 15:15:24.586773       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
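	
	Note: the burst of "Failed to watch ... forbidden" errors at 15:15:21-22 is the scheduler starting before its RBAC grants are visible; the final "Caches are synced" line shows it recovered on its own. A hedged way to check those permissions directly, using kubectl impersonation:
	
	    kubectl auth can-i list pods --as=system:kube-scheduler --all-namespaces
	    kubectl auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler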
	
	
	==> kubelet <==
	Oct 26 15:15:28 embed-certs-018497 kubelet[1291]: E1026 15:15:28.431994    1291 projected.go:196] Error preparing data for projected volume kube-api-access-swjcj for pod kube-system/kindnet-gxpz7: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:28 embed-certs-018497 kubelet[1291]: E1026 15:15:28.432088    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3a7a936-8d0c-41e8-a4eb-f956f18abe3e-kube-api-access-swjcj podName:f3a7a936-8d0c-41e8-a4eb-f956f18abe3e nodeName:}" failed. No retries permitted until 2025-10-26 15:15:28.93206202 +0000 UTC m=+5.470717728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-swjcj" (UniqueName: "kubernetes.io/projected/f3a7a936-8d0c-41e8-a4eb-f956f18abe3e-kube-api-access-swjcj") pod "kindnet-gxpz7" (UID: "f3a7a936-8d0c-41e8-a4eb-f956f18abe3e") : configmap "kube-root-ca.crt" not found
	Oct 26 15:15:28 embed-certs-018497 kubelet[1291]: E1026 15:15:28.434447    1291 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:28 embed-certs-018497 kubelet[1291]: E1026 15:15:28.434497    1291 projected.go:196] Error preparing data for projected volume kube-api-access-dzdrk for pod kube-system/kube-proxy-n7rjg: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:28 embed-certs-018497 kubelet[1291]: E1026 15:15:28.434610    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f86e937-34ab-4404-821d-7034a88cf390-kube-api-access-dzdrk podName:6f86e937-34ab-4404-821d-7034a88cf390 nodeName:}" failed. No retries permitted until 2025-10-26 15:15:28.934582007 +0000 UTC m=+5.473237715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzdrk" (UniqueName: "kubernetes.io/projected/6f86e937-34ab-4404-821d-7034a88cf390-kube-api-access-dzdrk") pod "kube-proxy-n7rjg" (UID: "6f86e937-34ab-4404-821d-7034a88cf390") : configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029133    1291 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029176    1291 projected.go:196] Error preparing data for projected volume kube-api-access-swjcj for pod kube-system/kindnet-gxpz7: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029240    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3a7a936-8d0c-41e8-a4eb-f956f18abe3e-kube-api-access-swjcj podName:f3a7a936-8d0c-41e8-a4eb-f956f18abe3e nodeName:}" failed. No retries permitted until 2025-10-26 15:15:30.029217134 +0000 UTC m=+6.567872850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-swjcj" (UniqueName: "kubernetes.io/projected/f3a7a936-8d0c-41e8-a4eb-f956f18abe3e-kube-api-access-swjcj") pod "kindnet-gxpz7" (UID: "f3a7a936-8d0c-41e8-a4eb-f956f18abe3e") : configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029808    1291 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029826    1291 projected.go:196] Error preparing data for projected volume kube-api-access-dzdrk for pod kube-system/kube-proxy-n7rjg: configmap "kube-root-ca.crt" not found
	Oct 26 15:15:29 embed-certs-018497 kubelet[1291]: E1026 15:15:29.029886    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f86e937-34ab-4404-821d-7034a88cf390-kube-api-access-dzdrk podName:6f86e937-34ab-4404-821d-7034a88cf390 nodeName:}" failed. No retries permitted until 2025-10-26 15:15:30.029871508 +0000 UTC m=+6.568527216 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzdrk" (UniqueName: "kubernetes.io/projected/6f86e937-34ab-4404-821d-7034a88cf390-kube-api-access-dzdrk") pod "kube-proxy-n7rjg" (UID: "6f86e937-34ab-4404-821d-7034a88cf390") : configmap "kube-root-ca.crt" not found
	Oct 26 15:15:30 embed-certs-018497 kubelet[1291]: I1026 15:15:30.037773    1291 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:15:30 embed-certs-018497 kubelet[1291]: W1026 15:15:30.362847    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/crio-6ee4278823689b47facc321aca4431f92d8d7e6716e0eb0c759b4bf385e9316f WatchSource:0}: Error finding container 6ee4278823689b47facc321aca4431f92d8d7e6716e0eb0c759b4bf385e9316f: Status 404 returned error can't find the container with id 6ee4278823689b47facc321aca4431f92d8d7e6716e0eb0c759b4bf385e9316f
	Oct 26 15:15:30 embed-certs-018497 kubelet[1291]: I1026 15:15:30.736591    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n7rjg" podStartSLOduration=2.736570743 podStartE2EDuration="2.736570743s" podCreationTimestamp="2025-10-26 15:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:15:30.719953918 +0000 UTC m=+7.258609634" watchObservedRunningTime="2025-10-26 15:15:30.736570743 +0000 UTC m=+7.275226451"
	Oct 26 15:15:34 embed-certs-018497 kubelet[1291]: I1026 15:15:34.603584    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxpz7" podStartSLOduration=6.603567293 podStartE2EDuration="6.603567293s" podCreationTimestamp="2025-10-26 15:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:15:30.737220736 +0000 UTC m=+7.275876452" watchObservedRunningTime="2025-10-26 15:15:34.603567293 +0000 UTC m=+11.142223009"
	Oct 26 15:16:10 embed-certs-018497 kubelet[1291]: I1026 15:16:10.956149    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: I1026 15:16:11.060790    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnww\" (UniqueName: \"kubernetes.io/projected/8bd8fd16-8a60-4e7c-bf17-b260091ded9d-kube-api-access-jsnww\") pod \"storage-provisioner\" (UID: \"8bd8fd16-8a60-4e7c-bf17-b260091ded9d\") " pod="kube-system/storage-provisioner"
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: I1026 15:16:11.061082    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f47c66b-f9f5-4983-94d0-849c70d61ba4-config-volume\") pod \"coredns-66bc5c9577-rkx49\" (UID: \"7f47c66b-f9f5-4983-94d0-849c70d61ba4\") " pod="kube-system/coredns-66bc5c9577-rkx49"
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: I1026 15:16:11.061126    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8bd8fd16-8a60-4e7c-bf17-b260091ded9d-tmp\") pod \"storage-provisioner\" (UID: \"8bd8fd16-8a60-4e7c-bf17-b260091ded9d\") " pod="kube-system/storage-provisioner"
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: I1026 15:16:11.061164    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstxd\" (UniqueName: \"kubernetes.io/projected/7f47c66b-f9f5-4983-94d0-849c70d61ba4-kube-api-access-jstxd\") pod \"coredns-66bc5c9577-rkx49\" (UID: \"7f47c66b-f9f5-4983-94d0-849c70d61ba4\") " pod="kube-system/coredns-66bc5c9577-rkx49"
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: W1026 15:16:11.317660    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/crio-340635445a6d9171e769dbea893a312242569525ec08c17c51f9563303708a56 WatchSource:0}: Error finding container 340635445a6d9171e769dbea893a312242569525ec08c17c51f9563303708a56: Status 404 returned error can't find the container with id 340635445a6d9171e769dbea893a312242569525ec08c17c51f9563303708a56
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: W1026 15:16:11.331447    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/crio-efc56a428a4cf36d4f7f9f2cc235db21866d90780c2d55b55cafb74f2e870e45 WatchSource:0}: Error finding container efc56a428a4cf36d4f7f9f2cc235db21866d90780c2d55b55cafb74f2e870e45: Status 404 returned error can't find the container with id efc56a428a4cf36d4f7f9f2cc235db21866d90780c2d55b55cafb74f2e870e45
	Oct 26 15:16:11 embed-certs-018497 kubelet[1291]: I1026 15:16:11.828848    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.828826822 podStartE2EDuration="42.828826822s" podCreationTimestamp="2025-10-26 15:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:16:11.799458152 +0000 UTC m=+48.338113860" watchObservedRunningTime="2025-10-26 15:16:11.828826822 +0000 UTC m=+48.367482547"
	Oct 26 15:16:13 embed-certs-018497 kubelet[1291]: I1026 15:16:13.692955    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rkx49" podStartSLOduration=44.692934133 podStartE2EDuration="44.692934133s" podCreationTimestamp="2025-10-26 15:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:16:11.829475265 +0000 UTC m=+48.368130972" watchObservedRunningTime="2025-10-26 15:16:13.692934133 +0000 UTC m=+50.231589849"
	Oct 26 15:16:13 embed-certs-018497 kubelet[1291]: I1026 15:16:13.780158    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjv4\" (UniqueName: \"kubernetes.io/projected/3e2e9efa-2562-4274-8e98-1f31c6a5039f-kube-api-access-2jjv4\") pod \"busybox\" (UID: \"3e2e9efa-2562-4274-8e98-1f31c6a5039f\") " pod="default/busybox"
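	
	Note: the repeated configmap "kube-root-ca.crt" not found mount errors are expected in a cluster's first seconds: the root-ca-cert-publisher controller has not yet written the per-namespace ConfigMap, and kubelet backs off and retries (500ms, then 1s above) until it appears. A hedged check that it eventually exists:
	
	    kubectl -n kube-system get configmap kube-root-ca.crt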
	
	
	==> storage-provisioner [90a584795b0ce7c9c5343513b975f0c31f505edf6ce33d5f318aea555b49d8b0] <==
	I1026 15:16:11.383572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:16:11.401720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:16:11.401838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:16:11.405555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:11.415297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:16:11.417214       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:16:11.417469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_02c0b2b8-b7e2-4ade-a1d4-0d89e367c249!
	I1026 15:16:11.420963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16612a96-da08-4714-84ae-ba8e387bd6f2", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-018497_02c0b2b8-b7e2-4ade-a1d4-0d89e367c249 became leader
	W1026 15:16:11.428975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:11.432533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:16:11.520819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_02c0b2b8-b7e2-4ade-a1d4-0d89e367c249!
	W1026 15:16:13.435558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:13.440923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:15.444729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:15.449380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:17.452842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:17.459939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:19.462631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:19.467010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:21.469961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:21.474497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:23.478003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:23.484107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:25.487980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:16:25.492886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
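	
	Note: the recurring W-level lines are the provisioner's leader-election renewals, which still read and write the deprecated v1 Endpoints object named in the log. Hedged commands to inspect the legacy object and to see which EndpointSlices exist alongside it:
	
	    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	    kubectl -n kube-system get endpointslices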
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-018497 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-018497 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-018497 --alsologtostderr -v=1: exit status 80 (1.951362802s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-018497 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:17:52.059677  904662 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:17:52.059947  904662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:52.059983  904662 out.go:374] Setting ErrFile to fd 2...
	I1026 15:17:52.060006  904662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:52.060378  904662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:17:52.060814  904662 out.go:368] Setting JSON to false
	I1026 15:17:52.060879  904662 mustload.go:65] Loading cluster: embed-certs-018497
	I1026 15:17:52.061311  904662 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:52.061828  904662 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:17:52.080610  904662 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:17:52.081000  904662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:17:52.141995  904662 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 15:17:52.132346911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:17:52.142711  904662 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-018497 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:17:52.146209  904662 out.go:179] * Pausing node embed-certs-018497 ... 
	I1026 15:17:52.149814  904662 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:17:52.150184  904662 ssh_runner.go:195] Run: systemctl --version
	I1026 15:17:52.150236  904662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:17:52.169153  904662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:17:52.277276  904662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:17:52.290830  904662 pause.go:52] kubelet running: true
	I1026 15:17:52.290912  904662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:17:52.540241  904662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:17:52.540334  904662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:17:52.612893  904662 cri.go:89] found id: "fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109"
	I1026 15:17:52.612917  904662 cri.go:89] found id: "e43d91bb5e3e6317a58891cd2e1ffa985b52cdbecb3fc66c4cb6d88beed6bb9a"
	I1026 15:17:52.612922  904662 cri.go:89] found id: "2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1"
	I1026 15:17:52.612926  904662 cri.go:89] found id: "03db0d606c127fce8efea05cc20d5e89e56ed82af785cf24f1a16c72af21e767"
	I1026 15:17:52.612930  904662 cri.go:89] found id: "a544d2cd71d6e7dbf96a6029fcb84048899600d50410fd953e7e9825ae6d54e4"
	I1026 15:17:52.612935  904662 cri.go:89] found id: "090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef"
	I1026 15:17:52.612939  904662 cri.go:89] found id: "409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45"
	I1026 15:17:52.612942  904662 cri.go:89] found id: "3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3"
	I1026 15:17:52.612945  904662 cri.go:89] found id: "d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540"
	I1026 15:17:52.612953  904662 cri.go:89] found id: "ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	I1026 15:17:52.612959  904662 cri.go:89] found id: "65acd0d0bd4152422d5b3b852f04705e7b5bc36efce35381af401cfd45e8efe0"
	I1026 15:17:52.612962  904662 cri.go:89] found id: ""
	I1026 15:17:52.613013  904662 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:17:52.632577  904662 retry.go:31] will retry after 209.404153ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:17:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:17:52.843130  904662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:17:52.859182  904662 pause.go:52] kubelet running: false
	I1026 15:17:52.859251  904662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:17:53.051707  904662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:17:53.051789  904662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:17:53.126648  904662 cri.go:89] found id: "fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109"
	I1026 15:17:53.126715  904662 cri.go:89] found id: "e43d91bb5e3e6317a58891cd2e1ffa985b52cdbecb3fc66c4cb6d88beed6bb9a"
	I1026 15:17:53.126735  904662 cri.go:89] found id: "2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1"
	I1026 15:17:53.126753  904662 cri.go:89] found id: "03db0d606c127fce8efea05cc20d5e89e56ed82af785cf24f1a16c72af21e767"
	I1026 15:17:53.126763  904662 cri.go:89] found id: "a544d2cd71d6e7dbf96a6029fcb84048899600d50410fd953e7e9825ae6d54e4"
	I1026 15:17:53.126768  904662 cri.go:89] found id: "090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef"
	I1026 15:17:53.126771  904662 cri.go:89] found id: "409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45"
	I1026 15:17:53.126775  904662 cri.go:89] found id: "3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3"
	I1026 15:17:53.126778  904662 cri.go:89] found id: "d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540"
	I1026 15:17:53.126784  904662 cri.go:89] found id: "ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	I1026 15:17:53.126788  904662 cri.go:89] found id: "65acd0d0bd4152422d5b3b852f04705e7b5bc36efce35381af401cfd45e8efe0"
	I1026 15:17:53.126791  904662 cri.go:89] found id: ""
	I1026 15:17:53.126846  904662 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:17:53.139093  904662 retry.go:31] will retry after 516.855758ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:17:53Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:17:53.656897  904662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:17:53.670354  904662 pause.go:52] kubelet running: false
	I1026 15:17:53.670477  904662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:17:53.836053  904662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:17:53.836128  904662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:17:53.918239  904662 cri.go:89] found id: "fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109"
	I1026 15:17:53.918266  904662 cri.go:89] found id: "e43d91bb5e3e6317a58891cd2e1ffa985b52cdbecb3fc66c4cb6d88beed6bb9a"
	I1026 15:17:53.918271  904662 cri.go:89] found id: "2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1"
	I1026 15:17:53.918275  904662 cri.go:89] found id: "03db0d606c127fce8efea05cc20d5e89e56ed82af785cf24f1a16c72af21e767"
	I1026 15:17:53.918278  904662 cri.go:89] found id: "a544d2cd71d6e7dbf96a6029fcb84048899600d50410fd953e7e9825ae6d54e4"
	I1026 15:17:53.918282  904662 cri.go:89] found id: "090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef"
	I1026 15:17:53.918291  904662 cri.go:89] found id: "409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45"
	I1026 15:17:53.918296  904662 cri.go:89] found id: "3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3"
	I1026 15:17:53.918299  904662 cri.go:89] found id: "d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540"
	I1026 15:17:53.918306  904662 cri.go:89] found id: "ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	I1026 15:17:53.918309  904662 cri.go:89] found id: "65acd0d0bd4152422d5b3b852f04705e7b5bc36efce35381af401cfd45e8efe0"
	I1026 15:17:53.918312  904662 cri.go:89] found id: ""
	I1026 15:17:53.918370  904662 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:17:53.933795  904662 out.go:203] 
	W1026 15:17:53.936787  904662 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:17:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:17:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:17:53.936808  904662 out.go:285] * 
	* 
	W1026 15:17:53.943947  904662 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:17:53.947028  904662 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-018497 --alsologtostderr -v=1 failed: exit status 80
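Note: the stderr trace above shows the actual failure: the pause path shells out to sudo runc list -f json, which reads runc's default state root /run/runc; on this CRI-O node that directory does not exist, so every retry fails and the command exits with GUEST_PAUSE. A hedged diagnosis sketch (the crio config paths are common defaults and the alternate root is hypothetical; check your install):

	# reproduce the failing call
	sudo runc list -f json
	# find where crio actually keeps its runtime state
	grep -Rn runtime_root /etc/crio/crio.conf /etc/crio/crio.conf.d 2>/dev/null
	# retry against that root, e.g. if runtime_root were /run/runc-crio (hypothetical)
	sudo runc --root /run/runc-crio list -f json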
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-018497
helpers_test.go:243: (dbg) docker inspect embed-certs-018497:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	        "Created": "2025-10-26T15:15:02.876896856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 899040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:16:39.200569583Z",
	            "FinishedAt": "2025-10-26T15:16:37.952780193Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hosts",
	        "LogPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad-json.log",
	        "Name": "/embed-certs-018497",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-018497:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-018497",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	                "LowerDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-018497",
	                "Source": "/var/lib/docker/volumes/embed-certs-018497/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-018497",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-018497",
	                "name.minikube.sigs.k8s.io": "embed-certs-018497",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6047482d4608b7dffb7e2120b773ca111b0ce8fd15af0214cbd6beae3491a7ba",
	            "SandboxKey": "/var/run/docker/netns/6047482d4608",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-018497": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:75:d9:12:8b:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d6626fff9fc6f2eadb00ab3ddc73eb8fae0b42c47b2901a5327d56ab6e3bb96",
	                    "EndpointID": "f2dd66347c0a87661f3d46251b3f5cfe8a03c726400d0dc1200eaae1d63da4aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-018497",
	                        "bf916fec8d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
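Note: the NetworkSettings.Ports map above is what the pause command parsed at 15:17:52.150236 to find the SSH endpoint. The same Go template can be run by hand; the command below mirrors the one in the trace:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-018497
	# -> 33832, matching the "22/tcp" entry in the inspect output above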
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497: exit status 2 (392.023224ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25: (1.416022802s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063     │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                                                                                               │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:16:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:16:47.438975  900582 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:16:47.439214  900582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:16:47.439243  900582 out.go:374] Setting ErrFile to fd 2...
	I1026 15:16:47.439261  900582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:16:47.439557  900582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:16:47.440034  900582 out.go:368] Setting JSON to false
	I1026 15:16:47.441672  900582 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17960,"bootTime":1761473848,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:16:47.441782  900582 start.go:141] virtualization:  
	I1026 15:16:47.448296  900582 out.go:179] * [no-preload-954807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:16:47.452145  900582 notify.go:220] Checking for updates...
	I1026 15:16:47.455463  900582 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:16:47.458881  900582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:16:47.462259  900582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:47.466254  900582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:16:47.469785  900582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:16:47.473245  900582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:16:47.477312  900582 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:47.477512  900582 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:16:47.525877  900582 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:16:47.526078  900582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:16:47.638929  900582 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:16:47.626362012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:16:47.639043  900582 docker.go:318] overlay module found
	I1026 15:16:47.643996  900582 out.go:179] * Using the docker driver based on user configuration
	I1026 15:16:47.647001  900582 start.go:305] selected driver: docker
	I1026 15:16:47.647027  900582 start.go:925] validating driver "docker" against <nil>
	I1026 15:16:47.647042  900582 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:16:47.653368  900582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:16:47.760227  900582 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:16:47.74923191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:16:47.760401  900582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:16:47.760646  900582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:16:47.764088  900582 out.go:179] * Using Docker driver with root privileges
	I1026 15:16:47.767196  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:16:47.767281  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:16:47.767299  900582 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:16:47.767384  900582 start.go:349] cluster config:
	{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:16:47.771258  900582 out.go:179] * Starting "no-preload-954807" primary control-plane node in "no-preload-954807" cluster
	I1026 15:16:47.774198  900582 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:16:47.777287  900582 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:16:47.780255  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:47.780408  900582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:16:47.780449  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json: {Name:mk898ca9db1ad5155ef5b61b472cca12dffb31bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:47.780638  900582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:16:47.783396  900582 cache.go:107] acquiring lock: {Name:mkbe2086c35e9fcbe8c03bdef4b41f05ca228154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.783536  900582 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:16:47.783552  900582 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.95011ms
	I1026 15:16:47.783614  900582 cache.go:107] acquiring lock: {Name:mk2325fad129f4b7d5aa09cccfdaa3da809a73fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.783858  900582 cache.go:107] acquiring lock: {Name:mk54c57481d4cb891842b1b352451c8a69a47281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784147  900582 cache.go:107] acquiring lock: {Name:mk5a8cbd33cc84011ebd29296028bb78893eefc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784260  900582 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:16:47.784296  900582 cache.go:107] acquiring lock: {Name:mkef4d9c96ab97f5a848fa8d925b343812fa37ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784900  900582 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:47.785795  900582 cache.go:107] acquiring lock: {Name:mkaf3dfd27f1d15aad668c191c7cc85c71d2c9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.785892  900582 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:47.786066  900582 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:47.786192  900582 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:47.786635  900582 cache.go:107] acquiring lock: {Name:mk964a36cda2ac1ad4a9006d14be02c6bd71c41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.786725  900582 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:47.784963  900582 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:16:47.787087  900582 cache.go:107] acquiring lock: {Name:mkc8d2557eb259bb5390e2f2db4396a6aec79411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.787179  900582 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:47.787784  900582 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:47.787866  900582 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:47.789123  900582 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:47.790011  900582 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:47.790087  900582 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:47.790238  900582 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:16:47.790343  900582 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:47.815259  900582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:16:47.815286  900582 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:16:47.815300  900582 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:16:47.815323  900582 start.go:360] acquireMachinesLock for no-preload-954807: {Name:mk3de11c10d64abd2c458c411445bde4bf32881c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.815442  900582 start.go:364] duration metric: took 98.972µs to acquireMachinesLock for "no-preload-954807"
	I1026 15:16:47.815475  900582 start.go:93] Provisioning new machine with config: &{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:16:47.815553  900582 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:16:46.334439  898916 cli_runner.go:164] Run: docker network inspect embed-certs-018497 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:46.364312  898916 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:16:46.368642  898916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:16:46.380513  898916 kubeadm.go:883] updating cluster {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:16:46.380625  898916 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:46.380676  898916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:46.427949  898916 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:16:46.428023  898916 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:16:46.428104  898916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:46.466590  898916 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:16:46.466616  898916 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:16:46.466623  898916 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:16:46.466727  898916 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-018497 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:16:46.466810  898916 ssh_runner.go:195] Run: crio config
	I1026 15:16:46.553374  898916 cni.go:84] Creating CNI manager for ""
	I1026 15:16:46.553402  898916 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:16:46.553453  898916 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:16:46.553490  898916 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-018497 NodeName:embed-certs-018497 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:16:46.553679  898916 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-018497"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
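
Editor's note: the generated kubeadm config above stacks four YAML documents in one file — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — separated by `---` and shipped to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only Go sketch that splits such a stream and reports each document's kind (reading a hypothetical local copy named kubeadm.yaml):

    // kinds.go: list the `kind:` of each document in a multi-doc YAML file.
    // Stdlib only; "kubeadm.yaml" is a hypothetical local copy of the config.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	raw, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	for i, doc := range strings.Split(string(raw), "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("document %d: %s\n", i+1, m[1])
    		}
    	}
    }
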
	
	I1026 15:16:46.553785  898916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:16:46.563684  898916 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:16:46.563813  898916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:16:46.571917  898916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:16:46.585506  898916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:16:46.599180  898916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:16:46.614916  898916 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:16:46.619143  898916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:16:46.629845  898916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:46.771997  898916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:16:46.791474  898916 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497 for IP: 192.168.76.2
	I1026 15:16:46.791499  898916 certs.go:195] generating shared ca certs ...
	I1026 15:16:46.791515  898916 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:46.791657  898916 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:16:46.791705  898916 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:16:46.791718  898916 certs.go:257] generating profile certs ...
	I1026 15:16:46.791803  898916 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.key
	I1026 15:16:46.791861  898916 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c
	I1026 15:16:46.791905  898916 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key
	I1026 15:16:46.792022  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:16:46.792054  898916 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:16:46.792065  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:16:46.792094  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:16:46.792119  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:16:46.792153  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:16:46.792199  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:16:46.792824  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:16:46.868148  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:16:46.915278  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:16:46.980645  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:16:47.023403  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:16:47.047146  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:16:47.093608  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:16:47.114107  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:16:47.138334  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:16:47.161291  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:16:47.179634  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:16:47.198075  898916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:16:47.211369  898916 ssh_runner.go:195] Run: openssl version
	I1026 15:16:47.218536  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:16:47.228537  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.232816  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.232881  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.278128  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:16:47.289882  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:16:47.301161  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.305608  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.305675  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.349462  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:16:47.359981  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:16:47.369755  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.374995  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.375062  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.426258  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
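
Editor's note: the openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout -in <cert>` prints the subject hash (b5213941, 51391683, 3ec20f2e in this run), and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL find the CA by that hash. A side-effect-free Go sketch of the same computation — it prints the link it would create rather than writing into /etc/ssl/certs, which needs root:

    // subjecthash.go: compute the OpenSSL subject hash for a CA and show
    // the /etc/ssl/certs symlink the log creates. Requires openssl on PATH.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // b5213941 for this CA
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	fmt.Printf("ln -fs %s %s\n", pem, link) // the command minikube runs
    }
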
	I1026 15:16:47.444239  898916 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:16:47.448442  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:16:47.556592  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:16:47.613479  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:16:47.717457  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:16:47.889157  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:16:47.969515  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
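
Editor's note: each `-checkend 86400` call above asks a yes/no question — does this certificate stay valid for the next 24 hours (86400 seconds)? The same check in Go with crypto/x509, reading a hypothetical local copy of one cert:

    // checkend.go: the Go equivalent of `openssl x509 -checkend 86400`.
    // "apiserver.crt" is a hypothetical local copy of one of the certs above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h") // openssl exits 1
    	} else {
    		fmt.Println("certificate is good for at least 24h") // openssl exits 0
    	}
    }
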
	I1026 15:16:48.126011  898916 kubeadm.go:400] StartCluster: {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:16:48.126103  898916 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:16:48.126176  898916 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:16:48.214574  898916 cri.go:89] found id: "090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef"
	I1026 15:16:48.214593  898916 cri.go:89] found id: "409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45"
	I1026 15:16:48.214597  898916 cri.go:89] found id: "3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3"
	I1026 15:16:48.214607  898916 cri.go:89] found id: "d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540"
	I1026 15:16:48.214612  898916 cri.go:89] found id: ""
	I1026 15:16:48.214669  898916 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:16:48.232829  898916 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:16:48Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:16:48.232918  898916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:16:48.264848  898916 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:16:48.264866  898916 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:16:48.264925  898916 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:16:48.278302  898916 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:16:48.278711  898916 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-018497" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:48.278802  898916 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-018497" cluster setting kubeconfig missing "embed-certs-018497" context setting]
	I1026 15:16:48.279068  898916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.280384  898916 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:16:48.300997  898916 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:16:48.301092  898916 kubeadm.go:601] duration metric: took 36.219149ms to restartPrimaryControlPlane
	I1026 15:16:48.301126  898916 kubeadm.go:402] duration metric: took 175.100564ms to StartCluster
	I1026 15:16:48.301157  898916 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.301303  898916 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:48.302501  898916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.302696  898916 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:16:48.303862  898916 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:48.303974  898916 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:16:48.304179  898916 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-018497"
	I1026 15:16:48.304196  898916 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-018497"
	W1026 15:16:48.304203  898916 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:16:48.304228  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.304672  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.309028  898916 addons.go:69] Setting dashboard=true in profile "embed-certs-018497"
	I1026 15:16:48.309126  898916 addons.go:238] Setting addon dashboard=true in "embed-certs-018497"
	W1026 15:16:48.309195  898916 addons.go:247] addon dashboard should already be in state true
	I1026 15:16:48.309520  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.311505  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.316758  898916 addons.go:69] Setting default-storageclass=true in profile "embed-certs-018497"
	I1026 15:16:48.316787  898916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-018497"
	I1026 15:16:48.317112  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.320784  898916 out.go:179] * Verifying Kubernetes components...
	I1026 15:16:48.326069  898916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:48.454353  898916 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:48.457799  898916 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:16:48.457818  898916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:16:48.457881  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.461402  898916 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:16:48.468831  898916 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:16:48.476362  898916 addons.go:238] Setting addon default-storageclass=true in "embed-certs-018497"
	W1026 15:16:48.476387  898916 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:16:48.476411  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.476864  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.477058  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:16:48.477071  898916 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:16:48.477123  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.528200  898916 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:16:48.528226  898916 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:16:48.528290  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.631184  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:48.668943  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:48.669759  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:47.819158  900582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:16:47.824939  900582 start.go:159] libmachine.API.Create for "no-preload-954807" (driver="docker")
	I1026 15:16:47.824995  900582 client.go:168] LocalClient.Create starting
	I1026 15:16:47.825075  900582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:16:47.825121  900582 main.go:141] libmachine: Decoding PEM data...
	I1026 15:16:47.825139  900582 main.go:141] libmachine: Parsing certificate...
	I1026 15:16:47.825215  900582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:16:47.835929  900582 main.go:141] libmachine: Decoding PEM data...
	I1026 15:16:47.835958  900582 main.go:141] libmachine: Parsing certificate...
	I1026 15:16:47.836386  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:16:47.862641  900582 cli_runner.go:211] docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:16:47.862720  900582 network_create.go:284] running [docker network inspect no-preload-954807] to gather additional debugging logs...
	I1026 15:16:47.862740  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807
	W1026 15:16:47.894970  900582 cli_runner.go:211] docker network inspect no-preload-954807 returned with exit code 1
	I1026 15:16:47.894998  900582 network_create.go:287] error running [docker network inspect no-preload-954807]: docker network inspect no-preload-954807: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-954807 not found
	I1026 15:16:47.895011  900582 network_create.go:289] output of [docker network inspect no-preload-954807]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-954807 not found
	
	** /stderr **
	I1026 15:16:47.895102  900582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:47.925274  900582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:16:47.925643  900582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:16:47.926051  900582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:16:47.926411  900582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5d6626fff9fc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:f0:61:6a:ff:0a} reservation:<nil>}
	I1026 15:16:47.926904  900582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d38a00}
	I1026 15:16:47.926930  900582 network_create.go:124] attempt to create docker network no-preload-954807 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 15:16:47.926987  900582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-954807 no-preload-954807
	I1026 15:16:48.076108  900582 network_create.go:108] docker network no-preload-954807 192.168.85.0/24 created
	I1026 15:16:48.076188  900582 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-954807" container
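Network selection here is purely client-side: minikube walks the candidate private /24s, skips 192.168.49/58/67/76 because existing br-* bridges already own them, takes 192.168.85.0/24, creates the bridge shown above, and then derives the node's static IP (.2, the first client address) from that subnet. The create call as a stand-alone sketch with this run's values:

    # Create the per-profile bridge network on the first free private /24
    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=no-preload-954807 \
      no-preload-954807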
	I1026 15:16:48.076348  900582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:16:48.120590  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 15:16:48.121239  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:16:48.121719  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:16:48.123502  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:16:48.131215  900582 cli_runner.go:164] Run: docker volume create no-preload-954807 --label name.minikube.sigs.k8s.io=no-preload-954807 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:16:48.140271  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:16:48.145173  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:16:48.163743  900582 oci.go:103] Successfully created a docker volume no-preload-954807
	I1026 15:16:48.163829  900582 cli_runner.go:164] Run: docker run --rm --name no-preload-954807-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-954807 --entrypoint /usr/bin/test -v no-preload-954807:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:16:48.196307  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:16:48.199570  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:16:48.199596  900582 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 415.303003ms
	I1026 15:16:48.199608  900582 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:16:48.711128  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:16:48.711220  900582 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 929.508467ms
	I1026 15:16:48.711250  900582 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:16:49.123056  900582 oci.go:107] Successfully prepared a docker volume no-preload-954807
	I1026 15:16:49.123090  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1026 15:16:49.152920  900582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 15:16:49.153132  900582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:16:49.425011  900582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-954807 --name no-preload-954807 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-954807 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-954807 --network no-preload-954807 --ip 192.168.85.2 --volume no-preload-954807:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
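The docker run above is the node container itself. The flags that matter for the rest of the log are the static --ip 192.168.85.2 on the network just created, the no-preload-954807:/var volume the sidecar pre-populated, and the --publish=127.0.0.1::<port> pairs (8443 apiserver, 22 ssh, 2376, 5000, 32443) that bind ephemeral localhost ports resolved later via inspect. Trimmed to the essentials (labels, tmpfs mounts, the extra publishes, and the image digest omitted):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --network no-preload-954807 --ip 192.168.85.2 \
      --volume no-preload-954807:/var \
      --memory=3072mb --cpus=2 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --hostname no-preload-954807 --name no-preload-954807 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773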
	I1026 15:16:49.456843  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:16:49.456878  900582 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.671087912s
	I1026 15:16:49.456894  900582 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:16:49.488266  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:16:49.518613  900582 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.734763587s
	I1026 15:16:49.518678  900582 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:16:49.518591  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:16:49.518719  900582 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.731637054s
	I1026 15:16:49.518738  900582 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:16:49.573588  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:16:49.573673  900582 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.78952901s
	I1026 15:16:49.573701  900582 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:16:50.120939  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Running}}
	I1026 15:16:50.154064  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:50.220945  900582 cli_runner.go:164] Run: docker exec no-preload-954807 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:16:50.312841  900582 oci.go:144] the created container "no-preload-954807" has a running status.
	I1026 15:16:50.312918  900582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa...
	I1026 15:16:50.695688  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:16:50.695722  900582 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.909091282s
	I1026 15:16:50.695755  900582 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:16:50.695774  900582 cache.go:87] Successfully saved all images to host disk.
	I1026 15:16:50.978239  900582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:16:51.006655  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:51.028961  900582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:16:51.028982  900582 kic_runner.go:114] Args: [docker exec --privileged no-preload-954807 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:16:51.086440  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:51.114793  900582 machine.go:93] provisionDockerMachine start ...
	I1026 15:16:51.114908  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:51.141076  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:51.141420  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:51.141442  900582 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:16:51.144856  900582 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
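The handshake EOF at 15:16:51 is benign: the container is running but sshd inside it is not yet accepting connections, and provisioning keeps retrying until the hostname probe succeeds at 15:16:54. A wait loop with the same shape (port from this run; KEY is a placeholder for the machine's id_rsa path):

    # Poll until sshd in the new node container answers
    until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
          -i "$KEY" -p 33837 docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done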
	I1026 15:16:48.922868  898916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:16:48.976087  898916 node_ready.go:35] waiting up to 6m0s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:16:49.043978  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:16:49.185041  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:16:49.185067  898916 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:16:49.213004  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:16:49.276944  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:16:49.276993  898916 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:16:49.364769  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:16:49.364802  898916 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:16:49.580995  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:16:49.581015  898916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:16:49.620897  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:16:49.620921  898916 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:16:49.646596  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:16:49.646618  898916 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:16:49.681901  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:16:49.681922  898916 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:16:49.722722  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:16:49.722752  898916 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:16:49.767102  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:16:49.767136  898916 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:16:49.786926  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
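All ten dashboard manifests staged above go to the apiserver in one kubectl apply with repeated -f flags, run as root with the kubelet's kubeconfig and the version-pinned kubectl binary, so the addon is applied (or fails) as a single invocation from minikube's point of view. The shape of the call, abbreviated:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      -f /etc/kubernetes/addons/dashboard-ns.yaml \
      -f /etc/kubernetes/addons/dashboard-dp.yaml \
      -f /etc/kubernetes/addons/dashboard-svc.yaml
    # ...plus the remaining dashboard-*.yaml files exactly as in the log line above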
	I1026 15:16:54.344348  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:16:54.344375  900582 ubuntu.go:182] provisioning hostname "no-preload-954807"
	I1026 15:16:54.344448  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:54.371481  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:54.371790  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:54.371807  900582 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-954807 && echo "no-preload-954807" | sudo tee /etc/hostname
	I1026 15:16:54.591514  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:16:54.591612  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:54.622465  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:54.622784  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:54.622807  900582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-954807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-954807/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-954807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:16:54.805286  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
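The script above is the hostname pinning logic: if /etc/hosts already maps the node name, nothing happens; otherwise an existing 127.0.1.1 line is rewritten in place, and only as a last resort is a new entry appended. The empty command output here means whichever branch ran produced nothing on stdout (the tee fallback would have echoed the new line). A quick on-node check (a sketch, not from the log):

    # Verify the node resolves its own name locally after provisioning
    grep -w 'no-preload-954807' /etc/hosts
    # expected: 127.0.1.1 no-preload-954807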
	I1026 15:16:54.805319  900582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:16:54.805349  900582 ubuntu.go:190] setting up certificates
	I1026 15:16:54.805359  900582 provision.go:84] configureAuth start
	I1026 15:16:54.805438  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:54.829832  900582 provision.go:143] copyHostCerts
	I1026 15:16:54.829898  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:16:54.829908  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:16:54.829984  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:16:54.830108  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:16:54.830112  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:16:54.830145  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:16:54.830205  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:16:54.830209  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:16:54.830233  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:16:54.830294  900582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.no-preload-954807 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-954807]
	I1026 15:16:55.316992  900582 provision.go:177] copyRemoteCerts
	I1026 15:16:55.317062  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:16:55.317117  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.338274  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:55.450897  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:16:55.478426  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:16:55.503603  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:16:55.530864  900582 provision.go:87] duration metric: took 725.479989ms to configureAuth
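configureAuth refreshed the host-side CA/cert/key copies and minted a server certificate whose SAN list covers every name and address the node answers on (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-954807), then pushed the CA cert and server pair to /etc/docker on the node. A standard way to confirm the SANs on the generated cert (not part of this log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'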
	I1026 15:16:55.530933  900582 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:16:55.531157  900582 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:55.531308  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.554390  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:55.554707  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:55.554723  900582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:16:55.944242  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
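This step writes /etc/sysconfig/crio.minikube so CRI-O treats the whole service CIDR (10.96.0.0/12) as an insecure registry; the echoed tee output above confirms the content before crio is restarted. A quick on-node check (sketch):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '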
	
	I1026 15:16:55.944325  900582 machine.go:96] duration metric: took 4.829477893s to provisionDockerMachine
	I1026 15:16:55.944349  900582 client.go:171] duration metric: took 8.119346969s to LocalClient.Create
	I1026 15:16:55.944393  900582 start.go:167] duration metric: took 8.119459479s to libmachine.API.Create "no-preload-954807"
	I1026 15:16:55.944418  900582 start.go:293] postStartSetup for "no-preload-954807" (driver="docker")
	I1026 15:16:55.944440  900582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:16:55.944528  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:16:55.944589  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.973859  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.107348  900582 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:16:56.114250  900582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:16:56.114314  900582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:16:56.114325  900582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:16:56.114386  900582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:16:56.114465  900582 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:16:56.114566  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:16:56.127704  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:16:56.158761  900582 start.go:296] duration metric: took 214.315915ms for postStartSetup
	I1026 15:16:56.159197  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:56.186303  900582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:16:56.186579  900582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:16:56.186619  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.220832  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.326752  900582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:16:56.332334  900582 start.go:128] duration metric: took 8.516765399s to createHost
	I1026 15:16:56.332356  900582 start.go:83] releasing machines lock for "no-preload-954807", held for 8.516898414s
	I1026 15:16:56.332423  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:56.358383  900582 ssh_runner.go:195] Run: cat /version.json
	I1026 15:16:56.358429  900582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:16:56.358437  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.358498  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.393039  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.402093  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.509194  900582 ssh_runner.go:195] Run: systemctl --version
	I1026 15:16:56.648393  900582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:16:56.724425  900582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:16:56.732283  900582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:16:56.732355  900582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:16:56.786853  900582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 15:16:56.786879  900582 start.go:495] detecting cgroup driver to use...
	I1026 15:16:56.786912  900582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:16:56.786965  900582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:16:56.810474  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:16:56.830521  900582 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:16:56.830591  900582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:16:56.853525  900582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:16:56.887076  900582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:16:57.125818  900582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:16:57.352583  900582 docker.go:234] disabling docker service ...
	I1026 15:16:57.352739  900582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:16:57.396450  900582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:16:57.421065  900582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:16:57.647786  900582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:16:57.856648  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:16:57.875419  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:16:57.894352  900582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:16:57.894502  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.912105  900582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:16:57.912230  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.926801  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.937044  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.951196  900582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:16:57.961176  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.978450  900582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:58.000527  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:58.019486  900582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:16:58.031719  900582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:16:58.042704  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:58.233395  900582 ssh_runner.go:195] Run: sudo systemctl restart crio
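The sed series between 15:16:57.894 and 15:16:58.042 rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart: pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs to match the driver detected on the host, replace conmon_cgroup with "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0; ip_forward is flipped directly via /proc. Condensed into one sketch (the log performs the same edits one sed at a time):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio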
	I1026 15:16:58.415605  900582 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:16:58.415684  900582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:16:58.420367  900582 start.go:563] Will wait 60s for crictl version
	I1026 15:16:58.420442  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:58.425279  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:16:58.453435  900582 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:16:58.453531  900582 ssh_runner.go:195] Run: crio --version
	I1026 15:16:58.495280  900582 ssh_runner.go:195] Run: crio --version
	I1026 15:16:58.540506  900582 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:16:55.718534  898916 node_ready.go:49] node "embed-certs-018497" is "Ready"
	I1026 15:16:55.718569  898916 node_ready.go:38] duration metric: took 6.742442993s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:16:55.718584  898916 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:16:55.718642  898916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:16:58.384500  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.340488423s)
	I1026 15:16:58.384558  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.171534863s)
	I1026 15:16:58.578454  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.791478642s)
	I1026 15:16:58.578598  898916 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.859942499s)
	I1026 15:16:58.578612  898916 api_server.go:72] duration metric: took 10.275894464s to wait for apiserver process to appear ...
	I1026 15:16:58.578619  898916 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:16:58.578637  898916 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:16:58.581612  898916 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-018497 addons enable metrics-server
	
	I1026 15:16:58.584212  898916 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1026 15:16:58.587126  898916 addons.go:514] duration metric: took 10.283146425s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 15:16:58.590025  898916 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:16:58.591547  898916 api_server.go:141] control plane version: v1.34.1
	I1026 15:16:58.591607  898916 api_server.go:131] duration metric: took 12.980902ms to wait for apiserver health ...
	I1026 15:16:58.591629  898916 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:16:58.595816  898916 system_pods.go:59] 8 kube-system pods found
	I1026 15:16:58.595850  898916 system_pods.go:61] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:58.595860  898916 system_pods.go:61] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:16:58.595867  898916 system_pods.go:61] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:58.595874  898916 system_pods.go:61] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:16:58.595880  898916 system_pods.go:61] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:16:58.595887  898916 system_pods.go:61] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:58.595894  898916 system_pods.go:61] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:16:58.595898  898916 system_pods.go:61] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Running
	I1026 15:16:58.595904  898916 system_pods.go:74] duration metric: took 4.257237ms to wait for pod list to return data ...
	I1026 15:16:58.595911  898916 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:16:58.598710  898916 default_sa.go:45] found service account: "default"
	I1026 15:16:58.598729  898916 default_sa.go:55] duration metric: took 2.811912ms for default service account to be created ...
	I1026 15:16:58.598738  898916 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:16:58.603993  898916 system_pods.go:86] 8 kube-system pods found
	I1026 15:16:58.604074  898916 system_pods.go:89] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:58.604097  898916 system_pods.go:89] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:16:58.604136  898916 system_pods.go:89] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:58.604164  898916 system_pods.go:89] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:16:58.604186  898916 system_pods.go:89] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:16:58.604241  898916 system_pods.go:89] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:58.604266  898916 system_pods.go:89] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:16:58.604282  898916 system_pods.go:89] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Running
	I1026 15:16:58.604303  898916 system_pods.go:126] duration metric: took 5.558388ms to wait for k8s-apps to be running ...
	I1026 15:16:58.604323  898916 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:16:58.604405  898916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:16:58.625012  898916 system_svc.go:56] duration metric: took 20.665862ms WaitForService to wait for kubelet
	I1026 15:16:58.625080  898916 kubeadm.go:586] duration metric: took 10.322359134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:16:58.625144  898916 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:16:58.628397  898916 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:16:58.628472  898916 node_conditions.go:123] node cpu capacity is 2
	I1026 15:16:58.628498  898916 node_conditions.go:105] duration metric: took 3.339778ms to run NodePressure ...
	I1026 15:16:58.628523  898916 start.go:241] waiting for startup goroutines ...
	I1026 15:16:58.628556  898916 start.go:246] waiting for cluster config update ...
	I1026 15:16:58.628584  898916 start.go:255] writing updated cluster config ...
	I1026 15:16:58.628961  898916 ssh_runner.go:195] Run: rm -f paused
	I1026 15:16:58.633896  898916 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:16:58.694826  898916 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:16:58.543654  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:58.560151  900582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:16:58.564690  900582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
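host.minikube.internal (the gateway of the profile network, 192.168.85.1) is injected with a rewrite-then-copy pattern: the brace group drops any stale entry, appends the fresh mapping, writes the result to a temp file, and only then copies it over /etc/hosts, so a failed write cannot leave the file truncated. Expected result on the node (sketch):

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.85.1	host.minikube.internal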
	I1026 15:16:58.576572  900582 kubeadm.go:883] updating cluster {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:16:58.576679  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:58.576766  900582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:58.611425  900582 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:16:58.611453  900582 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 15:16:58.611575  900582 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.612124  900582 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:58.612326  900582 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.612427  900582 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.612523  900582 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.612718  900582 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.612833  900582 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.612970  900582 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.615687  900582 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:58.616042  900582 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.616224  900582 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.616356  900582 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.616484  900582 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.616618  900582 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.616768  900582 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.616992  900582 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.847809  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.869760  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1026 15:16:58.870219  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.873863  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.877647  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.879176  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.915281  900582 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1026 15:16:58.915324  900582 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.915372  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:58.915779  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.962164  900582 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1026 15:16:58.962208  900582 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.962257  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031069  900582 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1026 15:16:59.031163  900582 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.031247  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031407  900582 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1026 15:16:59.031462  900582 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.031506  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031623  900582 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1026 15:16:59.031661  900582 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.031718  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.040691  900582 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1026 15:16:59.040808  900582 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.040857  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.051010  900582 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1026 15:16:59.051051  900582 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.051128  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.051185  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.051259  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.051332  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.051333  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.051383  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.051430  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.225768  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.225899  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.225989  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.226079  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.226170  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.226264  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.226348  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.380539  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.380665  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.380721  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.380804  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.380844  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.380881  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.380902  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.508353  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:16:59.508462  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:16:59.508531  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 15:16:59.508587  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1026 15:16:59.508633  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:16:59.508681  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:16:59.508778  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:16:59.508834  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:16:59.508898  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.508953  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:16:59.509015  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:16:59.509075  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:16:59.509121  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:16:59.547048  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1026 15:16:59.547088  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1026 15:16:59.547163  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1026 15:16:59.547180  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1026 15:16:59.547239  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1026 15:16:59.547256  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1026 15:16:59.547317  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1026 15:16:59.547335  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1026 15:16:59.547389  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1026 15:16:59.547414  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1026 15:16:59.547482  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:16:59.547580  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:16:59.547628  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1026 15:16:59.547645  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1026 15:16:59.605215  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1026 15:16:59.605294  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1026 15:16:59.646204  900582 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1026 15:16:59.646840  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1026 15:16:59.995124  900582 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1026 15:16:59.995375  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:00.488634  900582 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1026 15:17:00.488771  900582 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:00.488886  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:17:00.488946  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1026 15:17:00.553360  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:17:00.553613  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:17:00.610727  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
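The image.go:286 warning above shows the cached storage-provisioner image was amd64 on this arm64 host; minikube verifies the on-node image ID with podman image inspect, removes the mismatched copy with crictl, and reloads the correct tarball. The same repair done by hand would look roughly like this (image name and tarball path taken from the log; running the commands on the node is assumed):

    sudo podman image inspect --format '{{.Id}}' gcr.io/k8s-minikube/storage-provisioner:v5
    sudo crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5            # drop the wrong-arch copy
    sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5   # reload the arm64 tarball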
	W1026 15:17:00.753392  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:03.202185  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:03.297597  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.743932043s)
	I1026 15:17:03.297632  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1026 15:17:03.297653  900582 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:17:03.297710  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:17:03.297780  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.686955396s)
	I1026 15:17:03.297822  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:05.933063  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.635211188s)
	I1026 15:17:05.933155  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:05.933292  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.635566754s)
	I1026 15:17:05.933309  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1026 15:17:05.933329  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:17:05.933363  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1026 15:17:05.203565  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:07.702117  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:07.590231  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.656842658s)
	I1026 15:17:07.590261  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1026 15:17:07.590281  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:17:07.590332  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:17:07.590410  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.657239463s)
	I1026 15:17:07.590438  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 15:17:07.590506  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:09.738674  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.148296569s)
	I1026 15:17:09.738700  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1026 15:17:09.738720  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:17:09.738780  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:17:09.738863  900582 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.148340434s)
	I1026 15:17:09.738879  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1026 15:17:09.738894  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1026 15:17:11.763008  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.024205865s)
	I1026 15:17:11.763032  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1026 15:17:11.763054  900582 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:17:11.763106  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1026 15:17:09.703184  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:12.199799  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:16.196818  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.433691989s)
	I1026 15:17:16.196842  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1026 15:17:16.196859  900582 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:16.196908  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:16.795928  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 15:17:16.795972  900582 cache_images.go:124] Successfully loaded all cached images
	I1026 15:17:16.795978  900582 cache_images.go:93] duration metric: took 18.184513306s to LoadCachedImages
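The 18s spent in LoadCachedImages covers the pattern repeated above for every image: stat the tarball on the node, scp it from the local cache only when the stat fails, then podman-load it so the image lands in the storage CRI-O reads. A minimal shell sketch of that check-then-load step, using one image from the log (the `node` ssh alias is hypothetical):

    # on the host: transfer the cached tarball only if the node lacks it
    ssh node "stat -c '%s %y' /var/lib/minikube/images/kube-proxy_v1.34.1" >/dev/null 2>&1 \
      || scp ~/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 \
             node:/var/lib/minikube/images/kube-proxy_v1.34.1
    # on the node: podman load makes the image visible to CRI-O (shared storage)
    ssh node sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1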
	I1026 15:17:16.795988  900582 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:17:16.796076  900582 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-954807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
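The kubelet unit fragment above uses the standard systemd drop-in override: the bare ExecStart= line first clears the command inherited from the base kubelet.service, then the replacement ExecStart is set. Minikube scp's this drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log; after writing such a file, the usual activation steps are:

    sudo systemctl daemon-reload     # pick up the new drop-in
    sudo systemctl restart kubelet
    systemctl cat kubelet            # confirm which ExecStart won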
	I1026 15:17:16.796165  900582 ssh_runner.go:195] Run: crio config
	I1026 15:17:16.850783  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:17:16.850807  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:17:16.850827  900582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:17:16.850850  900582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-954807 NodeName:no-preload-954807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:17:16.850975  900582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-954807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
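This single file stacks four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. Newer kubeadm releases (v1.26+) can lint such a file offline before init; a quick sanity check against the path used in this run would be:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml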
	
	I1026 15:17:16.851056  900582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:17:16.859315  900582 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1026 15:17:16.859383  900582 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1026 15:17:16.867384  900582 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1026 15:17:16.867488  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1026 15:17:16.868233  900582 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubeadm
	I1026 15:17:16.868829  900582 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubelet
	I1026 15:17:16.872299  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1026 15:17:16.872336  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	W1026 15:17:14.200034  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:16.200500  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:18.204577  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:17.605716  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:17:17.624012  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1026 15:17:17.630599  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1026 15:17:17.630648  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1026 15:17:17.787599  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1026 15:17:17.797221  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1026 15:17:17.797331  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
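Each binary above is fetched from dl.k8s.io with a `checksum=file:` URL, meaning the download is verified against the published .sha256 file. The equivalent manual download-and-verify, following the documented pattern for Kubernetes release binaries:

    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # must print: kubelet: OK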
	I1026 15:17:18.419024  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:17:18.429535  900582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:17:18.446625  900582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:17:18.463030  900582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:17:18.480843  900582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:17:18.484916  900582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:17:18.497841  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:17:18.639807  900582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:17:18.658934  900582 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807 for IP: 192.168.85.2
	I1026 15:17:18.658952  900582 certs.go:195] generating shared ca certs ...
	I1026 15:17:18.658967  900582 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.659117  900582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:17:18.659159  900582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:17:18.659166  900582 certs.go:257] generating profile certs ...
	I1026 15:17:18.659220  900582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key
	I1026 15:17:18.659231  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt with IP's: []
	I1026 15:17:18.787373  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt ...
	I1026 15:17:18.787408  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: {Name:mk0b38f2ef642839cf190c25059aef2af5815488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.787616  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key ...
	I1026 15:17:18.787630  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key: {Name:mkb481055104e6ab4a7fbf16d12dbffb1867f6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.787725  900582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805
	I1026 15:17:18.787741  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 15:17:18.877353  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 ...
	I1026 15:17:18.877382  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805: {Name:mkbba21210481a13581e84a46724d1c441fc5aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.877571  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805 ...
	I1026 15:17:18.877587  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805: {Name:mkee85646333967e3b12a6a150c6c8c1ddb64068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.877675  900582 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt
	I1026 15:17:18.877752  900582 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key
	I1026 15:17:18.877813  900582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key
	I1026 15:17:18.877833  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt with IP's: []
	I1026 15:17:19.762654  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt ...
	I1026 15:17:19.762686  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt: {Name:mk940ee61c8e99ed61b9fa1cf3c2c6fbccc50b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:19.762890  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key ...
	I1026 15:17:19.762908  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key: {Name:mkde321844ba0beb806299e05ddc39a807993c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:19.763105  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:17:19.763154  900582 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:17:19.763168  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:17:19.763195  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:17:19.763222  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:17:19.763284  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:17:19.763336  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:17:19.763902  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:17:19.783710  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:17:19.803891  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:17:19.830542  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:17:19.848579  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:17:19.866933  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:17:19.886877  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:17:19.904927  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:17:19.922935  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:17:19.942261  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:17:19.963432  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:17:19.981430  900582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:17:19.995431  900582 ssh_runner.go:195] Run: openssl version
	I1026 15:17:20.005405  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:17:20.021179  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.026815  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.026944  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.068945  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:17:20.078224  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:17:20.087003  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.091268  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.091391  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.135665  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:17:20.144298  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:17:20.154868  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.159150  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.159227  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.201721  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
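The three `test -L ... || ln -fs` commands above implement OpenSSL's hashed-directory lookup: a trust anchor is located through a symlink named <subject-hash>.0 in /etc/ssl/certs. The hash used for minikubeCA.pem (b5213941 in this run) can be recomputed and relinked the same way:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"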
	I1026 15:17:20.210788  900582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:17:20.214548  900582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:17:20.214656  900582 kubeadm.go:400] StartCluster: {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:17:20.214746  900582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:17:20.214824  900582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:17:20.253168  900582 cri.go:89] found id: ""
	I1026 15:17:20.253321  900582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:17:20.263879  900582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:17:20.272470  900582 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:17:20.272588  900582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:17:20.290161  900582 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:17:20.290234  900582 kubeadm.go:157] found existing configuration files:
	
	I1026 15:17:20.290321  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:17:20.300167  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:17:20.300286  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:17:20.308139  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:17:20.317146  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:17:20.317232  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:17:20.324629  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:17:20.332511  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:17:20.332600  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:17:20.340417  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:17:20.348419  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:17:20.348500  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
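The stale-config cleanup above runs the same two-step check for each of the four kubeconfigs: grep for the expected control-plane endpoint and delete the file when the grep fails, so kubeadm regenerates it. Collapsed into a loop with the same paths as the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"   # stale or missing: let kubeadm rewrite it
    done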
	I1026 15:17:20.356538  900582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:17:20.428511  900582 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:17:20.428831  900582 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:17:20.502987  900582 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
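The first warning flags cgroups v1 (consistent with `cgroupDriver: cgroupfs` in the generated config above). A quick way to confirm which cgroup hierarchy a host is running:

    stat -fc %T /sys/fs/cgroup/   # cgroup2fs = v2 (unified), tmpfs = v1 (legacy)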
	W1026 15:17:20.701598  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:23.202017  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:25.704334  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:28.202734  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:30.701412  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:33.199901  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:37.351970  900582 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:17:37.352038  900582 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:17:37.352134  900582 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:17:37.352197  900582 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 15:17:37.352237  900582 kubeadm.go:318] OS: Linux
	I1026 15:17:37.352288  900582 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:17:37.352341  900582 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 15:17:37.352394  900582 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:17:37.352447  900582 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:17:37.352502  900582 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:17:37.352557  900582 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:17:37.352607  900582 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:17:37.352662  900582 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:17:37.352747  900582 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 15:17:37.352833  900582 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:17:37.352935  900582 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:17:37.353032  900582 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:17:37.353101  900582 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:17:37.357115  900582 out.go:252]   - Generating certificates and keys ...
	I1026 15:17:37.357213  900582 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:17:37.357286  900582 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:17:37.357364  900582 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:17:37.357427  900582 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:17:37.357493  900582 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:17:37.357549  900582 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:17:37.357609  900582 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:17:37.357740  900582 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-954807] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:17:37.357799  900582 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:17:37.357926  900582 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-954807] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:17:37.357997  900582 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:17:37.358072  900582 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:17:37.358123  900582 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:17:37.358185  900582 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:17:37.358242  900582 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:17:37.358306  900582 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:17:37.358365  900582 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:17:37.358435  900582 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:17:37.358496  900582 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:17:37.358584  900582 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:17:37.358656  900582 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:17:37.361556  900582 out.go:252]   - Booting up control plane ...
	I1026 15:17:37.361674  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:17:37.361774  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:17:37.361849  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:17:37.362008  900582 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:17:37.362123  900582 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:17:37.362242  900582 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:17:37.362337  900582 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:17:37.362398  900582 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:17:37.362567  900582 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:17:37.362715  900582 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:17:37.362797  900582 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001349413s
	I1026 15:17:37.362910  900582 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:17:37.363005  900582 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 15:17:37.363113  900582 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:17:37.363234  900582 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:17:37.363348  900582 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.070299815s
	I1026 15:17:37.363461  900582 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.410038844s
	I1026 15:17:37.363549  900582 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001520033s
	I1026 15:17:37.363665  900582 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:17:37.363799  900582 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:17:37.363873  900582 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:17:37.364070  900582 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-954807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:17:37.364132  900582 kubeadm.go:318] [bootstrap-token] Using token: 7jyxgn.utj0vxklu33lbfpx
	I1026 15:17:37.367300  900582 out.go:252]   - Configuring RBAC rules ...
	I1026 15:17:37.367434  900582 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:17:37.367535  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:17:37.367709  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:17:37.367873  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:17:37.368036  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:17:37.368134  900582 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:17:37.368266  900582 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:17:37.368333  900582 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:17:37.368386  900582 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:17:37.368391  900582 kubeadm.go:318] 
	I1026 15:17:37.368454  900582 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:17:37.368459  900582 kubeadm.go:318] 
	I1026 15:17:37.368539  900582 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:17:37.368544  900582 kubeadm.go:318] 
	I1026 15:17:37.368577  900582 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:17:37.368640  900582 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:17:37.368829  900582 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:17:37.368837  900582 kubeadm.go:318] 
	I1026 15:17:37.368895  900582 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:17:37.368906  900582 kubeadm.go:318] 
	I1026 15:17:37.368956  900582 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:17:37.368964  900582 kubeadm.go:318] 
	I1026 15:17:37.369019  900582 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:17:37.369113  900582 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:17:37.369189  900582 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:17:37.369200  900582 kubeadm.go:318] 
	I1026 15:17:37.369290  900582 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:17:37.369374  900582 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:17:37.369382  900582 kubeadm.go:318] 
	I1026 15:17:37.369471  900582 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7jyxgn.utj0vxklu33lbfpx \
	I1026 15:17:37.369582  900582 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:17:37.369606  900582 kubeadm.go:318] 	--control-plane 
	I1026 15:17:37.369613  900582 kubeadm.go:318] 
	I1026 15:17:37.369702  900582 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:17:37.369709  900582 kubeadm.go:318] 
	I1026 15:17:37.369794  900582 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7jyxgn.utj0vxklu33lbfpx \
	I1026 15:17:37.369919  900582 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
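The --discovery-token-ca-cert-hash printed above is a SHA-256 over the DER-encoded public key of the cluster CA. If the value is lost, kubeadm's documented recipe recomputes it from the CA certificate; note that minikube keeps the cert under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'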
	I1026 15:17:37.369931  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:17:37.369939  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:17:37.373097  900582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:17:37.375959  900582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:17:37.383151  900582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:17:37.383173  900582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:17:37.405640  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1026 15:17:35.200536  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:37.200650  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:38.699978  898916 pod_ready.go:94] pod "coredns-66bc5c9577-rkx49" is "Ready"
	I1026 15:17:38.700010  898916 pod_ready.go:86] duration metric: took 40.005153866s for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.702861  898916 pod_ready.go:83] waiting for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.707578  898916 pod_ready.go:94] pod "etcd-embed-certs-018497" is "Ready"
	I1026 15:17:38.707607  898916 pod_ready.go:86] duration metric: took 4.719265ms for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.709958  898916 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.714982  898916 pod_ready.go:94] pod "kube-apiserver-embed-certs-018497" is "Ready"
	I1026 15:17:38.715013  898916 pod_ready.go:86] duration metric: took 5.026041ms for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.717464  898916 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.898356  898916 pod_ready.go:94] pod "kube-controller-manager-embed-certs-018497" is "Ready"
	I1026 15:17:38.898428  898916 pod_ready.go:86] duration metric: took 180.93365ms for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.098895  898916 pod_ready.go:83] waiting for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.497889  898916 pod_ready.go:94] pod "kube-proxy-n7rjg" is "Ready"
	I1026 15:17:39.497920  898916 pod_ready.go:86] duration metric: took 398.998516ms for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.698043  898916 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:40.098182  898916 pod_ready.go:94] pod "kube-scheduler-embed-certs-018497" is "Ready"
	I1026 15:17:40.098214  898916 pod_ready.go:86] duration metric: took 400.140485ms for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:40.098227  898916 pod_ready.go:40] duration metric: took 41.464250896s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:17:40.166688  898916 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:17:40.169970  898916 out.go:179] * Done! kubectl is now configured to use "embed-certs-018497" cluster and "default" namespace by default
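The start.go:624 line above records a one-minor-version skew between kubectl 1.33.2 and the 1.34.1 cluster, which is within kubectl's supported ±1 minor window. It can be confirmed from the configured context:

    kubectl version   # prints client and server versions; skew beyond ±1 minor is warned about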
	I1026 15:17:37.731385  900582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:17:37.731547  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-954807 minikube.k8s.io/updated_at=2025_10_26T15_17_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-954807 minikube.k8s.io/primary=true
	I1026 15:17:37.731550  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:37.754725  900582 ops.go:34] apiserver oom_adj: -16
	I1026 15:17:37.874891  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:38.375625  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:38.875641  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:39.374971  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:39.875871  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:40.375920  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:40.875047  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:41.375959  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:41.875504  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:42.374997  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:42.570941  900582 kubeadm.go:1113] duration metric: took 4.839466199s to wait for elevateKubeSystemPrivileges
	I1026 15:17:42.570967  900582 kubeadm.go:402] duration metric: took 22.356317103s to StartCluster
	I1026 15:17:42.570984  900582 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:42.571057  900582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:17:42.573129  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:42.573521  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:17:42.573957  900582 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:42.574065  900582 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:17:42.574142  900582 addons.go:69] Setting storage-provisioner=true in profile "no-preload-954807"
	I1026 15:17:42.574156  900582 addons.go:238] Setting addon storage-provisioner=true in "no-preload-954807"
	I1026 15:17:42.574177  900582 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:17:42.574796  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.574981  900582 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:17:42.575544  900582 addons.go:69] Setting default-storageclass=true in profile "no-preload-954807"
	I1026 15:17:42.575575  900582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-954807"
	I1026 15:17:42.575898  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.579179  900582 out.go:179] * Verifying Kubernetes components...
	I1026 15:17:42.582243  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:17:42.625626  900582 addons.go:238] Setting addon default-storageclass=true in "no-preload-954807"
	I1026 15:17:42.625674  900582 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:17:42.626137  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.626332  900582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:42.629931  900582 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:17:42.629956  900582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:17:42.630021  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:17:42.660199  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:17:42.676836  900582 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:17:42.676859  900582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:17:42.676926  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:17:42.701968  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:17:42.914222  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:17:42.922664  900582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:17:43.000465  900582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:17:43.066049  900582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:17:43.843646  900582 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 15:17:43.846535  900582 node_ready.go:35] waiting up to 6m0s for node "no-preload-954807" to be "Ready" ...
	I1026 15:17:44.357935  900582 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-954807" context rescaled to 1 replicas
	I1026 15:17:44.375933  900582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309838056s)
	I1026 15:17:44.379662  900582 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:17:44.382673  900582 addons.go:514] duration metric: took 1.808586284s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1026 15:17:45.850300  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
	W1026 15:17:48.350068  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
	W1026 15:17:50.849595  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 26 15:17:24 embed-certs-018497 crio[653]: time="2025-10-26T15:17:24.454875358Z" level=info msg="Removed container e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4/dashboard-metrics-scraper" id=f633f597-755a-47c1-b2ab-2bf22b92600d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:17:27 embed-certs-018497 conmon[1141]: conmon 2c5a5ec5efcaa7b4cb46 <ninfo>: container 1151 exited with status 1
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.440070865Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=510e32a0-7229-478f-bd13-b77991e03f73 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.443885135Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d7bed5a0-0f8b-4943-89b5-66b9ec157ce9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.44942781Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec801b90-23f8-42c9-b6b0-1e2d7b910641 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.449717979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.465089562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.466615832Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b64e7b5d3845c7b61c8dbabd1e610ed457588898891e98da3b3f13e5738de5e9/merged/etc/passwd: no such file or directory"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.466774906Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b64e7b5d3845c7b61c8dbabd1e610ed457588898891e98da3b3f13e5738de5e9/merged/etc/group: no such file or directory"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.467135977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.48644394Z" level=info msg="Created container fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109: kube-system/storage-provisioner/storage-provisioner" id=ec801b90-23f8-42c9-b6b0-1e2d7b910641 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.490457785Z" level=info msg="Starting container: fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109" id=421a93fc-09b6-4872-ad42-2ae72d4cd389 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.501012445Z" level=info msg="Started container" PID=1651 containerID=fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109 description=kube-system/storage-provisioner/storage-provisioner id=421a93fc-09b6-4872-ad42-2ae72d4cd389 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9da72ea92b352b5fc9be1a5d901935711b56424d52758a82a4c06cc753e65c88
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.16544206Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.173772416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.173958822Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.174047291Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.17775038Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.177907116Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.17798971Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.18335365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.183522112Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.183595491Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.187643002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.187798212Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fc411acb1c8fd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   9da72ea92b352       storage-provisioner                          kube-system
	ce2dbffab4910       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   d5aafdbc03343       dashboard-metrics-scraper-6ffb444bf9-m58x4   kubernetes-dashboard
	65acd0d0bd415       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   561c23effa4ff       kubernetes-dashboard-855c9754f9-85vnc        kubernetes-dashboard
	e43d91bb5e3e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   69ce496f9d9e2       coredns-66bc5c9577-rkx49                     kube-system
	8cffe56f508af       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   386d85863d10d       busybox                                      default
	2c5a5ec5efcaa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   9da72ea92b352       storage-provisioner                          kube-system
	03db0d606c127       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   3fb084e4674be       kube-proxy-n7rjg                             kube-system
	a544d2cd71d6e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   f7d8425f9507f       kindnet-gxpz7                                kube-system
	090aba612ed4b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5c9de3eec7252       kube-scheduler-embed-certs-018497            kube-system
	409f07111dd90       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   121bb82f81152       kube-apiserver-embed-certs-018497            kube-system
	3bd8efc1a4f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   c4a42bc4acf8e       kube-controller-manager-embed-certs-018497   kube-system
	d9c73ce88247b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d0b1b63d1d5b3       etcd-embed-certs-018497                      kube-system
	
	
	==> coredns [e43d91bb5e3e6317a58891cd2e1ffa985b52cdbecb3fc66c4cb6d88beed6bb9a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43109 - 59388 "HINFO IN 6632748502928444588.6955070418265700300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014342739s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-018497
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-018497
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-018497
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_15_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-018497
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-018497
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                072f2fa1-40d7-443d-9b77-e971842fc752
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-rkx49                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-018497                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-gxpz7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-018497             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-embed-certs-018497    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-n7rjg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-018497             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m58x4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-85vnc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m24s              kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m32s              kubelet          Node embed-certs-018497 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m32s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s              kubelet          Node embed-certs-018497 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s              kubelet          Node embed-certs-018497 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m32s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s              node-controller  Node embed-certs-018497 event: Registered Node embed-certs-018497 in Controller
	  Normal   NodeReady                105s               kubelet          Node embed-certs-018497 status is now: NodeReady
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node embed-certs-018497 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node embed-certs-018497 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node embed-certs-018497 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-018497 event: Registered Node embed-certs-018497 in Controller
	
	
	==> dmesg <==
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540] <==
	{"level":"warn","ts":"2025-10-26T15:16:53.779787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.792668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.816754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.838788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.855289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.873308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.890824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.902851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.919264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.941133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.957573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.971169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.993241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.017656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.037652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.061078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.073379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.093451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.108769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.137000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.147061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.184928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.204073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.282133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.340971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34874","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:17:55 up  5:00,  0 user,  load average: 4.53, 3.62, 3.05
	Linux embed-certs-018497 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a544d2cd71d6e7dbf96a6029fcb84048899600d50410fd953e7e9825ae6d54e4] <==
	I1026 15:16:56.945972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:16:56.946184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:16:56.946315       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:16:56.946328       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:16:56.946341       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:16:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:16:57.161206       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:16:57.161300       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:16:57.161357       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:16:57.162206       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:17:27.162458       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:17:27.162565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:17:27.162648       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:17:27.162724       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:17:28.661658       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:17:28.661774       1 metrics.go:72] Registering metrics
	I1026 15:17:28.661873       1 controller.go:711] "Syncing nftables rules"
	I1026 15:17:37.164827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:17:37.164873       1 main.go:301] handling current node
	I1026 15:17:47.167831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:17:47.167866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45] <==
	I1026 15:16:55.982319       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:16:55.982561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:16:55.982629       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:16:55.982637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:16:55.982991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:16:55.985542       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:16:55.985555       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:16:55.985560       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:16:55.985566       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:16:55.997594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:16:56.022901       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:16:56.022955       1 policy_source.go:240] refreshing policies
	I1026 15:16:56.028643       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 15:16:56.099297       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:16:56.158799       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:16:56.463750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:16:58.012168       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:16:58.175487       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:16:58.319245       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:16:58.407570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:16:58.546910       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.38.171"}
	I1026 15:16:58.570125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.237.136"}
	I1026 15:17:00.751163       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:17:00.784230       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:17:01.035693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3] <==
	I1026 15:17:00.665230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:17:00.669583       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:17:00.670965       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:17:00.673181       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:17:00.673535       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:17:00.673932       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:17:00.673965       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:17:00.680271       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:17:00.680618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:17:00.684956       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:17:00.686684       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:17:00.704093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:17:00.709767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:17:00.720092       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:17:00.720232       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:17:00.720746       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:17:00.720809       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:17:00.721500       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-018497"
	I1026 15:17:00.721605       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:17:00.726044       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:17:00.726102       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:17:00.741991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:00.800595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:00.800642       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:17:00.800651       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [03db0d606c127fce8efea05cc20d5e89e56ed82af785cf24f1a16c72af21e767] <==
	I1026 15:16:57.918562       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:16:58.283975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:16:58.392772       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:16:58.392909       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:16:58.400896       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:16:58.628331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:16:58.628471       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:16:58.634688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:16:58.635064       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:16:58.636727       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:16:58.638561       1 config.go:200] "Starting service config controller"
	I1026 15:16:58.638646       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:16:58.638691       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:16:58.638718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:16:58.638754       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:16:58.638781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:16:58.639495       1 config.go:309] "Starting node config controller"
	I1026 15:16:58.641437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:16:58.641467       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:16:58.739546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:16:58.739549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:16:58.739566       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef] <==
	I1026 15:16:53.633638       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:16:57.672079       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:16:57.672178       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:16:57.697401       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:16:57.697545       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:16:57.697570       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:16:57.697613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:16:57.728090       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.742997       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.742636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.743045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.851619       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.851680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.897647       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373072     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8ff\" (UniqueName: \"kubernetes.io/projected/f5ef036c-7b62-4cca-a13d-a421490f29ac-kube-api-access-dg8ff\") pod \"dashboard-metrics-scraper-6ffb444bf9-m58x4\" (UID: \"f5ef036c-7b62-4cca-a13d-a421490f29ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373741     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a3da6ff-3ac6-4c07-bf84-71014b0de0c8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-85vnc\" (UID: \"2a3da6ff-3ac6-4c07-bf84-71014b0de0c8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373924     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpk2\" (UniqueName: \"kubernetes.io/projected/2a3da6ff-3ac6-4c07-bf84-71014b0de0c8-kube-api-access-6xpk2\") pod \"kubernetes-dashboard-855c9754f9-85vnc\" (UID: \"2a3da6ff-3ac6-4c07-bf84-71014b0de0c8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.374056     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5ef036c-7b62-4cca-a13d-a421490f29ac-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-m58x4\" (UID: \"f5ef036c-7b62-4cca-a13d-a421490f29ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4"
	Oct 26 15:17:08 embed-certs-018497 kubelet[779]: I1026 15:17:08.377225     779 scope.go:117] "RemoveContainer" containerID="815e3e02c237486d3f53689ba03841be3cb7a070dbb17980dacf98e511f267d1"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: I1026 15:17:09.387911     779 scope.go:117] "RemoveContainer" containerID="815e3e02c237486d3f53689ba03841be3cb7a070dbb17980dacf98e511f267d1"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: I1026 15:17:09.388315     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: E1026 15:17:09.390987     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:10 embed-certs-018497 kubelet[779]: I1026 15:17:10.391914     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:10 embed-certs-018497 kubelet[779]: E1026 15:17:10.392091     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:11 embed-certs-018497 kubelet[779]: I1026 15:17:11.547317     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:11 embed-certs-018497 kubelet[779]: E1026 15:17:11.547496     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:23 embed-certs-018497 kubelet[779]: I1026 15:17:23.962762     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.429142     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.429785     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: E1026 15:17:24.430058     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.461547     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc" podStartSLOduration=10.316163864 podStartE2EDuration="23.461529159s" podCreationTimestamp="2025-10-26 15:17:01 +0000 UTC" firstStartedPulling="2025-10-26 15:17:01.664044877 +0000 UTC m=+14.870179454" lastFinishedPulling="2025-10-26 15:17:14.80941019 +0000 UTC m=+28.015544749" observedRunningTime="2025-10-26 15:17:15.433654574 +0000 UTC m=+28.639789142" watchObservedRunningTime="2025-10-26 15:17:24.461529159 +0000 UTC m=+37.667663719"
	Oct 26 15:17:27 embed-certs-018497 kubelet[779]: I1026 15:17:27.438771     779 scope.go:117] "RemoveContainer" containerID="2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1"
	Oct 26 15:17:31 embed-certs-018497 kubelet[779]: I1026 15:17:31.547045     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:31 embed-certs-018497 kubelet[779]: E1026 15:17:31.547242     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:42 embed-certs-018497 kubelet[779]: I1026 15:17:42.963398     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:42 embed-certs-018497 kubelet[779]: E1026 15:17:42.963620     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [65acd0d0bd4152422d5b3b852f04705e7b5bc36efce35381af401cfd45e8efe0] <==
	2025/10/26 15:17:14 Using namespace: kubernetes-dashboard
	2025/10/26 15:17:14 Using in-cluster config to connect to apiserver
	2025/10/26 15:17:14 Using secret token for csrf signing
	2025/10/26 15:17:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:17:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:17:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:17:14 Generating JWE encryption key
	2025/10/26 15:17:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:17:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:17:15 Initializing JWE encryption key from synchronized object
	2025/10/26 15:17:15 Creating in-cluster Sidecar client
	2025/10/26 15:17:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:17:15 Serving insecurely on HTTP port: 9090
	2025/10/26 15:17:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:17:14 Starting overwatch
	
	
	==> storage-provisioner [2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1] <==
	I1026 15:16:57.175332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:17:27.180899       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
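The fatal line above is the first provisioner instance timing out against the in-cluster apiserver VIP (10.96.0.1:443) while the node was still settling after the restart; the replacement container below succeeds moments later. A minimal reachability probe for the same VIP, runnable from any pod in the cluster (the address and the 5s timeout are illustrative assumptions, not part of the harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default "kubernetes" Service VIP that the
	// storage provisioner above failed to reach; a dial that exceeds
	// the deadline reproduces its "i/o timeout".
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Printf("apiserver VIP unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}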
	
	
	==> storage-provisioner [fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109] <==
	I1026 15:17:27.531242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:17:27.532203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:17:27.534824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:30.990981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:35.251804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:38.850629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:41.903754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.927149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.932951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:44.933230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:17:44.933508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4!
	I1026 15:17:44.934266       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16612a96-da08-4714-84ae-ba8e387bd6f2", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4 became leader
	W1026 15:17:44.939176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.968410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:45.033888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4!
	W1026 15:17:46.972390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:46.977926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:48.981076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:48.985852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:50.988483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:50.995095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:52.998765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:53.012925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:55.017678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:55.025606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
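The kubelet entries in the dump above show dashboard-metrics-scraper cycling through a 20s CrashLoopBackOff right up to the point the kubelet was stopped. When triaging this locally, the crashed container's own output is usually the fastest lead; a minimal sketch that shells out the same way helpers_test.go does, assuming kubectl is on PATH and the context from this run still exists:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Pull the logs of the previously crashed container instance; pod
	// and context names are taken from the kubelet log above.
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-018497",
		"-n", "kubernetes-dashboard",
		"logs", "--previous",
		"dashboard-metrics-scraper-6ffb444bf9-m58x4",
	).CombinedOutput()
	if err != nil {
		log.Printf("kubectl logs failed: %v", err)
	}
	fmt.Printf("%s", out)
}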
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-018497 -n embed-certs-018497: exit status 2 (385.716323ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-018497 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
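The field selector at helpers_test.go:269 above is the harness's quick check for pods stuck outside phase Running. The same query can be issued in-process instead of through kubectl; a sketch assuming k8s.io/client-go is available as a dependency and KUBECONFIG points at the profile under test:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same filter the post-mortem uses: anything not in phase Running,
	// across all namespaces.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}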
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-018497
helpers_test.go:243: (dbg) docker inspect embed-certs-018497:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	        "Created": "2025-10-26T15:15:02.876896856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 899040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:16:39.200569583Z",
	            "FinishedAt": "2025-10-26T15:16:37.952780193Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/hosts",
	        "LogPath": "/var/lib/docker/containers/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad/bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad-json.log",
	        "Name": "/embed-certs-018497",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-018497:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-018497",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf916fec8d462b45c6a6e6809853f95028cad544cfc79b88bdcce338b44966ad",
	                "LowerDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2e13d3220e33af97475356d7be4dbbac0d16f6e2a572870f7342c6218d95ce2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-018497",
	                "Source": "/var/lib/docker/volumes/embed-certs-018497/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-018497",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-018497",
	                "name.minikube.sigs.k8s.io": "embed-certs-018497",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6047482d4608b7dffb7e2120b773ca111b0ce8fd15af0214cbd6beae3491a7ba",
	            "SandboxKey": "/var/run/docker/netns/6047482d4608",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-018497": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:75:d9:12:8b:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d6626fff9fc6f2eadb00ab3ddc73eb8fae0b42c47b2901a5327d56ab6e3bb96",
	                    "EndpointID": "f2dd66347c0a87661f3d46251b3f5cfe8a03c726400d0dc1200eaae1d63da4aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-018497",
	                        "bf916fec8d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
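Most of the inspect payload above is irrelevant to the pause failure; the part later steps depend on is the published port map (22 to 33832, 8443 to 33835, and so on). Rather than parsing the full JSON, docker's built-in template support can extract just that field; a sketch using the container name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Print only the NetworkSettings.Ports map from the inspect output,
	// e.g. {"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33832"}],...}.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{json .NetworkSettings.Ports}}",
		"embed-certs-018497",
	).Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	fmt.Printf("%s\n", out)
}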
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497: exit status 2 (359.235103ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
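minikube status signals degraded components through its exit code, which is why the harness tolerates exit status 2 here even though stdout still prints Running. A sketch that captures both the template output and the exit code, assuming the binary path used throughout this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format", "{{.Host}}",
		"-p", "embed-certs-018497", "-n", "embed-certs-018497")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit flags components that are not fully running;
		// the post-mortem treats 2 as "may be ok" after a pause attempt.
		fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), ee.ExitCode())
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s exit=0\n", strings.TrimSpace(string(out)))
}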
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-018497 logs -n 25: (1.601437244s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-969063                                                                                                                                                                                                                   │ force-systemd-env-969063     │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ cert-options-209492 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ ssh     │ -p cert-options-209492 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                                                                                               │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:16:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:16:47.438975  900582 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:16:47.439214  900582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:16:47.439243  900582 out.go:374] Setting ErrFile to fd 2...
	I1026 15:16:47.439261  900582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:16:47.439557  900582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:16:47.440034  900582 out.go:368] Setting JSON to false
	I1026 15:16:47.441672  900582 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17960,"bootTime":1761473848,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:16:47.441782  900582 start.go:141] virtualization:  
	I1026 15:16:47.448296  900582 out.go:179] * [no-preload-954807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:16:47.452145  900582 notify.go:220] Checking for updates...
	I1026 15:16:47.455463  900582 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:16:47.458881  900582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:16:47.462259  900582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:47.466254  900582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:16:47.469785  900582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:16:47.473245  900582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:16:47.477312  900582 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:47.477512  900582 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:16:47.525877  900582 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:16:47.526078  900582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:16:47.638929  900582 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:16:47.626362012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:16:47.639043  900582 docker.go:318] overlay module found
	I1026 15:16:47.643996  900582 out.go:179] * Using the docker driver based on user configuration
	I1026 15:16:47.647001  900582 start.go:305] selected driver: docker
	I1026 15:16:47.647027  900582 start.go:925] validating driver "docker" against <nil>
	I1026 15:16:47.647042  900582 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:16:47.653368  900582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:16:47.760227  900582 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:16:47.74923191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:16:47.760401  900582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:16:47.760646  900582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:16:47.764088  900582 out.go:179] * Using Docker driver with root privileges
	I1026 15:16:47.767196  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:16:47.767281  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:16:47.767299  900582 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:16:47.767384  900582 start.go:349] cluster config:
	{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:16:47.771258  900582 out.go:179] * Starting "no-preload-954807" primary control-plane node in "no-preload-954807" cluster
	I1026 15:16:47.774198  900582 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:16:47.777287  900582 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:16:47.780255  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:47.780408  900582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:16:47.780449  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json: {Name:mk898ca9db1ad5155ef5b61b472cca12dffb31bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:47.780638  900582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:16:47.783396  900582 cache.go:107] acquiring lock: {Name:mkbe2086c35e9fcbe8c03bdef4b41f05ca228154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.783536  900582 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:16:47.783552  900582 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.95011ms
	I1026 15:16:47.783614  900582 cache.go:107] acquiring lock: {Name:mk2325fad129f4b7d5aa09cccfdaa3da809a73fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.783858  900582 cache.go:107] acquiring lock: {Name:mk54c57481d4cb891842b1b352451c8a69a47281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784147  900582 cache.go:107] acquiring lock: {Name:mk5a8cbd33cc84011ebd29296028bb78893eefc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784260  900582 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:16:47.784296  900582 cache.go:107] acquiring lock: {Name:mkef4d9c96ab97f5a848fa8d925b343812fa37ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.784900  900582 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:47.785795  900582 cache.go:107] acquiring lock: {Name:mkaf3dfd27f1d15aad668c191c7cc85c71d2c9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.785892  900582 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:47.786066  900582 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:47.786192  900582 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:47.786635  900582 cache.go:107] acquiring lock: {Name:mk964a36cda2ac1ad4a9006d14be02c6bd71c41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.786725  900582 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:47.784963  900582 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:16:47.787087  900582 cache.go:107] acquiring lock: {Name:mkc8d2557eb259bb5390e2f2db4396a6aec79411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.787179  900582 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:47.787784  900582 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:47.787866  900582 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:47.789123  900582 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:47.790011  900582 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:47.790087  900582 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:47.790238  900582 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:16:47.790343  900582 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:47.815259  900582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:16:47.815286  900582 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:16:47.815300  900582 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:16:47.815323  900582 start.go:360] acquireMachinesLock for no-preload-954807: {Name:mk3de11c10d64abd2c458c411445bde4bf32881c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:16:47.815442  900582 start.go:364] duration metric: took 98.972µs to acquireMachinesLock for "no-preload-954807"
	I1026 15:16:47.815475  900582 start.go:93] Provisioning new machine with config: &{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:16:47.815553  900582 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:16:46.334439  898916 cli_runner.go:164] Run: docker network inspect embed-certs-018497 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:46.364312  898916 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:16:46.368642  898916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:16:46.380513  898916 kubeadm.go:883] updating cluster {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:16:46.380625  898916 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:46.380676  898916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:46.427949  898916 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:16:46.428023  898916 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:16:46.428104  898916 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:46.466590  898916 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:16:46.466616  898916 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:16:46.466623  898916 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:16:46.466727  898916 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-018497 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:16:46.466810  898916 ssh_runner.go:195] Run: crio config
	I1026 15:16:46.553374  898916 cni.go:84] Creating CNI manager for ""
	I1026 15:16:46.553402  898916 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:16:46.553453  898916 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:16:46.553490  898916 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-018497 NodeName:embed-certs-018497 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:16:46.553679  898916 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-018497"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
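The kubeadm, kubelet, and kube-proxy configuration rendered above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below, just before the kubelet is restarted. Recent kubeadm releases can lint such a file in place; a hedged sketch, assuming the node's kubeadm is new enough (roughly 1.26+) to carry the "config validate" subcommand:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask kubeadm to validate the rendered config; the "config validate"
	// subcommand is an assumption about the installed kubeadm version.
	out, err := exec.Command("kubeadm", "config", "validate",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("validation failed: %v\n", err)
	}
}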
	
	I1026 15:16:46.553785  898916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:16:46.563684  898916 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:16:46.563813  898916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:16:46.571917  898916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:16:46.585506  898916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:16:46.599180  898916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:16:46.614916  898916 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:16:46.619143  898916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:16:46.629845  898916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:46.771997  898916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:16:46.791474  898916 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497 for IP: 192.168.76.2
	I1026 15:16:46.791499  898916 certs.go:195] generating shared ca certs ...
	I1026 15:16:46.791515  898916 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:46.791657  898916 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:16:46.791705  898916 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:16:46.791718  898916 certs.go:257] generating profile certs ...
	I1026 15:16:46.791803  898916 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/client.key
	I1026 15:16:46.791861  898916 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key.ac97108c
	I1026 15:16:46.791905  898916 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key
	I1026 15:16:46.792022  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:16:46.792054  898916 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:16:46.792065  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:16:46.792094  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:16:46.792119  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:16:46.792153  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:16:46.792199  898916 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:16:46.792824  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:16:46.868148  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:16:46.915278  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:16:46.980645  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:16:47.023403  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:16:47.047146  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:16:47.093608  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:16:47.114107  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/embed-certs-018497/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:16:47.138334  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:16:47.161291  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:16:47.179634  898916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:16:47.198075  898916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:16:47.211369  898916 ssh_runner.go:195] Run: openssl version
	I1026 15:16:47.218536  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:16:47.228537  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.232816  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.232881  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:16:47.278128  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:16:47.289882  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:16:47.301161  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.305608  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.305675  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:16:47.349462  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:16:47.359981  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:16:47.369755  898916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.374995  898916 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.375062  898916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:16:47.426258  898916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
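Each installed CA above goes through the same two-step dance: "openssl x509 -hash -noout" computes the subject-name hash, and a "<hash>.0" symlink in /etc/ssl/certs lets OpenSSL-based clients find the certificate by hash lookup. A minimal Go sketch of that step, assuming the paths from the log; the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the openssl-then-ln sequence in the log:
// compute the subject hash of a PEM and point "<hash>.0" at it.
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}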
	I1026 15:16:47.444239  898916 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:16:47.448442  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:16:47.556592  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:16:47.613479  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:16:47.717457  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:16:47.889157  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:16:47.969515  898916 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:16:48.126011  898916 kubeadm.go:400] StartCluster: {Name:embed-certs-018497 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-018497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:16:48.126103  898916 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:16:48.126176  898916 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:16:48.214574  898916 cri.go:89] found id: "090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef"
	I1026 15:16:48.214593  898916 cri.go:89] found id: "409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45"
	I1026 15:16:48.214597  898916 cri.go:89] found id: "3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3"
	I1026 15:16:48.214607  898916 cri.go:89] found id: "d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540"
	I1026 15:16:48.214612  898916 cri.go:89] found id: ""
	I1026 15:16:48.214669  898916 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:16:48.232829  898916 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:16:48Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:16:48.232918  898916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:16:48.264848  898916 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:16:48.264866  898916 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:16:48.264925  898916 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:16:48.278302  898916 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:16:48.278711  898916 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-018497" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:48.278802  898916 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-018497" cluster setting kubeconfig missing "embed-certs-018497" context setting]
	I1026 15:16:48.279068  898916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.280384  898916 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:16:48.300997  898916 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:16:48.301092  898916 kubeadm.go:601] duration metric: took 36.219149ms to restartPrimaryControlPlane
	I1026 15:16:48.301126  898916 kubeadm.go:402] duration metric: took 175.100564ms to StartCluster
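The "sudo diff -u" run a few lines above is the entire reconfiguration test: diff exits 0 when the deployed /var/tmp/minikube/kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, so the restart path can skip re-running kubeadm. A sketch of deciding on the exit code alone, under the assumption that only statuses 0 and 1 are meaningful (diff signals its own failures with other codes):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfigure reports whether the rendered kubeadm config changed,
// judging purely by diff's exit status.
func needsReconfigure(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // exit 0: configs identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // exit 1: configs differ
	}
	return false, err // diff itself failed (missing file, etc.)
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}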
	I1026 15:16:48.301157  898916 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.301303  898916 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:16:48.302501  898916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:16:48.302696  898916 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:16:48.303862  898916 config.go:182] Loaded profile config "embed-certs-018497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:48.303974  898916 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:16:48.304179  898916 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-018497"
	I1026 15:16:48.304196  898916 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-018497"
	W1026 15:16:48.304203  898916 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:16:48.304228  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.304672  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.309028  898916 addons.go:69] Setting dashboard=true in profile "embed-certs-018497"
	I1026 15:16:48.309126  898916 addons.go:238] Setting addon dashboard=true in "embed-certs-018497"
	W1026 15:16:48.309195  898916 addons.go:247] addon dashboard should already be in state true
	I1026 15:16:48.309520  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.311505  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.316758  898916 addons.go:69] Setting default-storageclass=true in profile "embed-certs-018497"
	I1026 15:16:48.316787  898916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-018497"
	I1026 15:16:48.317112  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.320784  898916 out.go:179] * Verifying Kubernetes components...
	I1026 15:16:48.326069  898916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:48.454353  898916 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:48.457799  898916 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:16:48.457818  898916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:16:48.457881  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.461402  898916 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:16:48.468831  898916 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:16:48.476362  898916 addons.go:238] Setting addon default-storageclass=true in "embed-certs-018497"
	W1026 15:16:48.476387  898916 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:16:48.476411  898916 host.go:66] Checking if "embed-certs-018497" exists ...
	I1026 15:16:48.476864  898916 cli_runner.go:164] Run: docker container inspect embed-certs-018497 --format={{.State.Status}}
	I1026 15:16:48.477058  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:16:48.477071  898916 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:16:48.477123  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.528200  898916 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:16:48.528226  898916 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:16:48.528290  898916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-018497
	I1026 15:16:48.631184  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:48.668943  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:48.669759  898916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/embed-certs-018497/id_rsa Username:docker}
	I1026 15:16:47.819158  900582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:16:47.824939  900582 start.go:159] libmachine.API.Create for "no-preload-954807" (driver="docker")
	I1026 15:16:47.824995  900582 client.go:168] LocalClient.Create starting
	I1026 15:16:47.825075  900582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:16:47.825121  900582 main.go:141] libmachine: Decoding PEM data...
	I1026 15:16:47.825139  900582 main.go:141] libmachine: Parsing certificate...
	I1026 15:16:47.825215  900582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:16:47.835929  900582 main.go:141] libmachine: Decoding PEM data...
	I1026 15:16:47.835958  900582 main.go:141] libmachine: Parsing certificate...
	I1026 15:16:47.836386  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:16:47.862641  900582 cli_runner.go:211] docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:16:47.862720  900582 network_create.go:284] running [docker network inspect no-preload-954807] to gather additional debugging logs...
	I1026 15:16:47.862740  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807
	W1026 15:16:47.894970  900582 cli_runner.go:211] docker network inspect no-preload-954807 returned with exit code 1
	I1026 15:16:47.894998  900582 network_create.go:287] error running [docker network inspect no-preload-954807]: docker network inspect no-preload-954807: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-954807 not found
	I1026 15:16:47.895011  900582 network_create.go:289] output of [docker network inspect no-preload-954807]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-954807 not found
	
	** /stderr **
	I1026 15:16:47.895102  900582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:47.925274  900582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:16:47.925643  900582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:16:47.926051  900582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:16:47.926411  900582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5d6626fff9fc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:f0:61:6a:ff:0a} reservation:<nil>}
	I1026 15:16:47.926904  900582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d38a00}
	I1026 15:16:47.926930  900582 network_create.go:124] attempt to create docker network no-preload-954807 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 15:16:47.926987  900582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-954807 no-preload-954807
	I1026 15:16:48.076108  900582 network_create.go:108] docker network no-preload-954807 192.168.85.0/24 created
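The subnet probe above walks candidate 192.168.x.0/24 blocks (49, 58, 67, 76, ...) in steps of 9 in the third octet and takes the first one no existing bridge owns, here 192.168.85.0/24. A sketch of that scan with the taken set stubbed in; minikube derives the real set from docker network inspect:

package main

import "fmt"

func main() {
	// Subnets already owned by docker bridges, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Step the third octet by 9 and stop at the first free block.
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24
			return
		}
	}
}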
	I1026 15:16:48.076188  900582 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-954807" container
	I1026 15:16:48.076348  900582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:16:48.120590  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 15:16:48.121239  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:16:48.121719  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:16:48.123502  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:16:48.131215  900582 cli_runner.go:164] Run: docker volume create no-preload-954807 --label name.minikube.sigs.k8s.io=no-preload-954807 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:16:48.140271  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:16:48.145173  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:16:48.163743  900582 oci.go:103] Successfully created a docker volume no-preload-954807
	I1026 15:16:48.163829  900582 cli_runner.go:164] Run: docker run --rm --name no-preload-954807-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-954807 --entrypoint /usr/bin/test -v no-preload-954807:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:16:48.196307  900582 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:16:48.199570  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:16:48.199596  900582 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 415.303003ms
	I1026 15:16:48.199608  900582 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:16:48.711128  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:16:48.711220  900582 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 929.508467ms
	I1026 15:16:48.711250  900582 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:16:49.123056  900582 oci.go:107] Successfully prepared a docker volume no-preload-954807
	I1026 15:16:49.123090  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1026 15:16:49.152920  900582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 15:16:49.153132  900582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:16:49.425011  900582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-954807 --name no-preload-954807 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-954807 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-954807 --network no-preload-954807 --ip 192.168.85.2 --volume no-preload-954807:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:16:49.456843  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:16:49.456878  900582 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.671087912s
	I1026 15:16:49.456894  900582 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:16:49.488266  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:16:49.518613  900582 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.734763587s
	I1026 15:16:49.518678  900582 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:16:49.518591  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:16:49.518719  900582 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.731637054s
	I1026 15:16:49.518738  900582 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:16:49.573588  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:16:49.573673  900582 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.78952901s
	I1026 15:16:49.573701  900582 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:16:50.120939  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Running}}
	I1026 15:16:50.154064  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:50.220945  900582 cli_runner.go:164] Run: docker exec no-preload-954807 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:16:50.312841  900582 oci.go:144] the created container "no-preload-954807" has a running status.
	I1026 15:16:50.312918  900582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa...
	I1026 15:16:50.695688  900582 cache.go:157] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:16:50.695722  900582 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.909091282s
	I1026 15:16:50.695755  900582 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:16:50.695774  900582 cache.go:87] Successfully saved all images to host disk.
	I1026 15:16:50.978239  900582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:16:51.006655  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:51.028961  900582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:16:51.028982  900582 kic_runner.go:114] Args: [docker exec --privileged no-preload-954807 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:16:51.086440  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:16:51.114793  900582 machine.go:93] provisionDockerMachine start ...
	I1026 15:16:51.114908  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:51.141076  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:51.141420  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:51.141442  900582 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:16:51.144856  900582 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
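The "handshake failed: EOF" at 15:16:51 is expected: sshd inside the just-started container is not accepting connections yet, and the same command is shown succeeding at 15:16:54, which implies a dial-retry loop. A sketch of such a loop using golang.org/x/crypto/ssh, with the user, port, and key path taken from this log; the retry pattern is an assumption, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd in the fresh container answers.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Duration(i+1) * time.Second) // linear backoff
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-954807/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local kic container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33837", cfg, 10)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}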
	I1026 15:16:48.922868  898916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:16:48.976087  898916 node_ready.go:35] waiting up to 6m0s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:16:49.043978  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:16:49.185041  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:16:49.185067  898916 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:16:49.213004  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:16:49.276944  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:16:49.276993  898916 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:16:49.364769  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:16:49.364802  898916 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:16:49.580995  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:16:49.581015  898916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:16:49.620897  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:16:49.620921  898916 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:16:49.646596  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:16:49.646618  898916 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:16:49.681901  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:16:49.681922  898916 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:16:49.722722  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:16:49.722752  898916 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:16:49.767102  898916 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:16:49.767136  898916 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:16:49.786926  898916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:16:54.344348  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:16:54.344375  900582 ubuntu.go:182] provisioning hostname "no-preload-954807"
	I1026 15:16:54.344448  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:54.371481  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:54.371790  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:54.371807  900582 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-954807 && echo "no-preload-954807" | sudo tee /etc/hostname
	I1026 15:16:54.591514  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:16:54.591612  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:54.622465  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:54.622784  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:54.622807  900582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-954807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-954807/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-954807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:16:54.805286  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:16:54.805319  900582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:16:54.805349  900582 ubuntu.go:190] setting up certificates
	I1026 15:16:54.805359  900582 provision.go:84] configureAuth start
	I1026 15:16:54.805438  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:54.829832  900582 provision.go:143] copyHostCerts
	I1026 15:16:54.829898  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:16:54.829908  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:16:54.829984  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:16:54.830108  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:16:54.830112  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:16:54.830145  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:16:54.830205  900582 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:16:54.830209  900582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:16:54.830233  900582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:16:54.830294  900582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.no-preload-954807 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-954807]
	I1026 15:16:55.316992  900582 provision.go:177] copyRemoteCerts
	I1026 15:16:55.317062  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:16:55.317117  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.338274  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:55.450897  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:16:55.478426  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:16:55.503603  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:16:55.530864  900582 provision.go:87] duration metric: took 725.479989ms to configureAuth
	I1026 15:16:55.530933  900582 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:16:55.531157  900582 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:16:55.531308  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.554390  900582 main.go:141] libmachine: Using SSH client type: native
	I1026 15:16:55.554707  900582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I1026 15:16:55.554723  900582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:16:55.944242  900582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:16:55.944325  900582 machine.go:96] duration metric: took 4.829477893s to provisionDockerMachine
	I1026 15:16:55.944349  900582 client.go:171] duration metric: took 8.119346969s to LocalClient.Create
	I1026 15:16:55.944393  900582 start.go:167] duration metric: took 8.119459479s to libmachine.API.Create "no-preload-954807"
	I1026 15:16:55.944418  900582 start.go:293] postStartSetup for "no-preload-954807" (driver="docker")
	I1026 15:16:55.944440  900582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:16:55.944528  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:16:55.944589  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:55.973859  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.107348  900582 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:16:56.114250  900582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:16:56.114314  900582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:16:56.114325  900582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:16:56.114386  900582 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:16:56.114465  900582 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:16:56.114566  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:16:56.127704  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:16:56.158761  900582 start.go:296] duration metric: took 214.315915ms for postStartSetup
	I1026 15:16:56.159197  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:56.186303  900582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:16:56.186579  900582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:16:56.186619  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.220832  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.326752  900582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:16:56.332334  900582 start.go:128] duration metric: took 8.516765399s to createHost
	I1026 15:16:56.332356  900582 start.go:83] releasing machines lock for "no-preload-954807", held for 8.516898414s
	I1026 15:16:56.332423  900582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:16:56.358383  900582 ssh_runner.go:195] Run: cat /version.json
	I1026 15:16:56.358429  900582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:16:56.358437  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.358498  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:16:56.393039  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.402093  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:16:56.509194  900582 ssh_runner.go:195] Run: systemctl --version
	I1026 15:16:56.648393  900582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:16:56.724425  900582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:16:56.732283  900582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:16:56.732355  900582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:16:56.786853  900582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1026 15:16:56.786879  900582 start.go:495] detecting cgroup driver to use...
	I1026 15:16:56.786912  900582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:16:56.786965  900582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:16:56.810474  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:16:56.830521  900582 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:16:56.830591  900582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:16:56.853525  900582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:16:56.887076  900582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:16:57.125818  900582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:16:57.352583  900582 docker.go:234] disabling docker service ...
	I1026 15:16:57.352739  900582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:16:57.396450  900582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:16:57.421065  900582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:16:57.647786  900582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:16:57.856648  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:16:57.875419  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:16:57.894352  900582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:16:57.894502  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.912105  900582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:16:57.912230  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.926801  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.937044  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.951196  900582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:16:57.961176  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:57.978450  900582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:58.000527  900582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:16:58.019486  900582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:16:58.031719  900582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:16:58.042704  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:16:58.233395  900582 ssh_runner.go:195] Run: sudo systemctl restart crio
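Each sed call above rewrites one key in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. A regexp-based Go equivalent for the pause_image line, with the path and image taken from the log; the helper itself is illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Matches the whole pause_image assignment line, like the sed pattern
// 's|^.*pause_image = .*$|...|' in the log.
var pauseImageLine = regexp.MustCompile(`(?m)^.*pause_image = .*$`)

func setPauseImage(conf, image string) error {
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	out := pauseImageLine.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(conf, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}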
	I1026 15:16:58.415605  900582 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:16:58.415684  900582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:16:58.420367  900582 start.go:563] Will wait 60s for crictl version
	I1026 15:16:58.420442  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:58.425279  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:16:58.453435  900582 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:16:58.453531  900582 ssh_runner.go:195] Run: crio --version
	I1026 15:16:58.495280  900582 ssh_runner.go:195] Run: crio --version
	I1026 15:16:58.540506  900582 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:16:55.718534  898916 node_ready.go:49] node "embed-certs-018497" is "Ready"
	I1026 15:16:55.718569  898916 node_ready.go:38] duration metric: took 6.742442993s for node "embed-certs-018497" to be "Ready" ...
	I1026 15:16:55.718584  898916 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:16:55.718642  898916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:16:58.384500  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.340488423s)
	I1026 15:16:58.384558  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.171534863s)
	I1026 15:16:58.578454  898916 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.791478642s)
	I1026 15:16:58.578598  898916 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.859942499s)
	I1026 15:16:58.578612  898916 api_server.go:72] duration metric: took 10.275894464s to wait for apiserver process to appear ...
	I1026 15:16:58.578619  898916 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:16:58.578637  898916 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:16:58.581612  898916 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-018497 addons enable metrics-server
	
	I1026 15:16:58.584212  898916 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1026 15:16:58.587126  898916 addons.go:514] duration metric: took 10.283146425s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 15:16:58.590025  898916 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:16:58.591547  898916 api_server.go:141] control plane version: v1.34.1
	I1026 15:16:58.591607  898916 api_server.go:131] duration metric: took 12.980902ms to wait for apiserver health ...
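
The healthz probe above is a plain HTTPS GET against the apiserver; a 200 with body "ok" ends the wait. A hand-run equivalent (endpoint from the log; -k skips verification of the minikube CA, which the host does not trust):

    curl -k https://192.168.76.2:8443/healthz   # expect: HTTP 200, body "ok"
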
	I1026 15:16:58.591629  898916 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:16:58.595816  898916 system_pods.go:59] 8 kube-system pods found
	I1026 15:16:58.595850  898916 system_pods.go:61] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:58.595860  898916 system_pods.go:61] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:16:58.595867  898916 system_pods.go:61] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:58.595874  898916 system_pods.go:61] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:16:58.595880  898916 system_pods.go:61] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:16:58.595887  898916 system_pods.go:61] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:58.595894  898916 system_pods.go:61] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:16:58.595898  898916 system_pods.go:61] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Running
	I1026 15:16:58.595904  898916 system_pods.go:74] duration metric: took 4.257237ms to wait for pod list to return data ...
	I1026 15:16:58.595911  898916 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:16:58.598710  898916 default_sa.go:45] found service account: "default"
	I1026 15:16:58.598729  898916 default_sa.go:55] duration metric: took 2.811912ms for default service account to be created ...
	I1026 15:16:58.598738  898916 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:16:58.603993  898916 system_pods.go:86] 8 kube-system pods found
	I1026 15:16:58.604074  898916 system_pods.go:89] "coredns-66bc5c9577-rkx49" [7f47c66b-f9f5-4983-94d0-849c70d61ba4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:16:58.604097  898916 system_pods.go:89] "etcd-embed-certs-018497" [633cdc5b-0d5c-4171-9de3-5685936c2fb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:16:58.604136  898916 system_pods.go:89] "kindnet-gxpz7" [f3a7a936-8d0c-41e8-a4eb-f956f18abe3e] Running
	I1026 15:16:58.604164  898916 system_pods.go:89] "kube-apiserver-embed-certs-018497" [1c52b92a-1675-4f3b-861e-c22b4ad078fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:16:58.604186  898916 system_pods.go:89] "kube-controller-manager-embed-certs-018497" [2952af65-8177-4300-b6bc-a138bb999d23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:16:58.604241  898916 system_pods.go:89] "kube-proxy-n7rjg" [6f86e937-34ab-4404-821d-7034a88cf390] Running
	I1026 15:16:58.604266  898916 system_pods.go:89] "kube-scheduler-embed-certs-018497" [6e1d3a85-4441-4adf-9bc5-a462d709eeb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:16:58.604282  898916 system_pods.go:89] "storage-provisioner" [8bd8fd16-8a60-4e7c-bf17-b260091ded9d] Running
	I1026 15:16:58.604303  898916 system_pods.go:126] duration metric: took 5.558388ms to wait for k8s-apps to be running ...
	I1026 15:16:58.604323  898916 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:16:58.604405  898916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:16:58.625012  898916 system_svc.go:56] duration metric: took 20.665862ms WaitForService to wait for kubelet
	I1026 15:16:58.625080  898916 kubeadm.go:586] duration metric: took 10.322359134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:16:58.625144  898916 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:16:58.628397  898916 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:16:58.628472  898916 node_conditions.go:123] node cpu capacity is 2
	I1026 15:16:58.628498  898916 node_conditions.go:105] duration metric: took 3.339778ms to run NodePressure ...
	I1026 15:16:58.628523  898916 start.go:241] waiting for startup goroutines ...
	I1026 15:16:58.628556  898916 start.go:246] waiting for cluster config update ...
	I1026 15:16:58.628584  898916 start.go:255] writing updated cluster config ...
	I1026 15:16:58.628961  898916 ssh_runner.go:195] Run: rm -f paused
	I1026 15:16:58.633896  898916 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:16:58.694826  898916 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
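
The extra wait above polls every kube-system pod carrying one of the listed control-plane labels until it is Ready or gone, within a 4m budget. A rough kubectl equivalent for the coredns pod being watched here (label and timeout from the log; kubectl wait is a stand-in for minikube's internal poller, not what the test actually runs):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m0s
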
	I1026 15:16:58.543654  900582 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:16:58.560151  900582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:16:58.564690  900582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
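
The /etc/hosts update above is the usual replace-then-copy dance: strip any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back as root so the redirection itself does not need sudo. Spelled out (IP and hostname from the log):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.85.1	host.minikube.internal"
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
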
	I1026 15:16:58.576572  900582 kubeadm.go:883] updating cluster {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:16:58.576679  900582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:16:58.576766  900582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:16:58.611425  900582 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:16:58.611453  900582 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
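
Because this is the no-preload profile, no preloaded image tarball exists, so the code probes the runtime for a sentinel image and falls back to loading each cached image individually. The probe, reduced to a shell check (sentinel image name from the log):

    sudo crictl images --output json \
      | grep -q '"registry.k8s.io/kube-apiserver:v1.34.1"' \
      || echo "not preloaded; loading images from the local cache"
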
	I1026 15:16:58.611575  900582 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.612124  900582 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:58.612326  900582 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.612427  900582 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.612523  900582 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.612718  900582 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.612833  900582 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.612970  900582 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.615687  900582 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:16:58.616042  900582 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.616224  900582 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.616356  900582 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.616484  900582 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.616618  900582 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.616768  900582 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.616992  900582 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.847809  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.869760  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1026 15:16:58.870219  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:58.873863  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:58.877647  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:58.879176  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:58.915281  900582 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1026 15:16:58.915324  900582 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:58.915372  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:58.915779  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:58.962164  900582 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1026 15:16:58.962208  900582 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1026 15:16:58.962257  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031069  900582 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1026 15:16:59.031163  900582 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.031247  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031407  900582 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1026 15:16:59.031462  900582 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.031506  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.031623  900582 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1026 15:16:59.031661  900582 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.031718  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.040691  900582 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1026 15:16:59.040808  900582 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.040857  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:16:59.051010  900582 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1026 15:16:59.051051  900582 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.051128  900582 ssh_runner.go:195] Run: which crictl
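
Each "needs transfer" line above comes from comparing the image ID podman reports on the node against the ID recorded for the cached copy; a mismatch or a missing image forces a removal and reload. The check, as plain shell, using the coredns image and hash from the log:

    img=registry.k8s.io/coredns/coredns:v1.12.1
    want=138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
    got=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
    [ "$got" = "$want" ] || echo "$img needs transfer"
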
	I1026 15:16:59.051185  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.051259  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.051332  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.051333  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.051383  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.051430  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.225768  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.225899  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.225989  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.226079  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.226170  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.226264  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.226348  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.380539  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:16:59.380665  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:16:59.380721  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.380804  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:16:59.380844  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:16:59.380881  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:16:59.380902  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:16:59.508353  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:16:59.508462  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:16:59.508531  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1026 15:16:59.508587  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1026 15:16:59.508633  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:16:59.508681  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:16:59.508778  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:16:59.508834  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:16:59.508898  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:16:59.508953  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:16:59.509015  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:16:59.509075  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:16:59.509121  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:16:59.547048  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1026 15:16:59.547088  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1026 15:16:59.547163  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1026 15:16:59.547180  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1026 15:16:59.547239  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1026 15:16:59.547256  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1026 15:16:59.547317  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1026 15:16:59.547335  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1026 15:16:59.547389  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1026 15:16:59.547414  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1026 15:16:59.547482  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:16:59.547580  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:16:59.547628  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1026 15:16:59.547645  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1026 15:16:59.605215  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1026 15:16:59.605294  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
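
The stat/scp pairs above implement copy-if-absent: stat exiting 1 with "No such file or directory" means the tarball is not on the node yet, so the cached copy is pushed over the SSH session. The pattern, sketched for the kube-scheduler image (node path from the log; the "node" alias and CACHE variable are assumptions standing in for minikube's ssh_runner and local cache directory):

    f=/var/lib/minikube/images/kube-scheduler_v1.34.1
    ssh node "stat -c '%s %y' $f" 2>/dev/null \
      || scp "$CACHE/kube-scheduler_v1.34.1" "node:$f"
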
	I1026 15:16:59.646204  900582 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1026 15:16:59.646840  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1026 15:16:59.995124  900582 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1026 15:16:59.995375  900582 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:00.488634  900582 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1026 15:17:00.488771  900582 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:00.488886  900582 ssh_runner.go:195] Run: which crictl
	I1026 15:17:00.488946  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1026 15:17:00.553360  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:17:00.553613  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:17:00.610727  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1026 15:17:00.753392  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:03.202185  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:03.297597  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.743932043s)
	I1026 15:17:03.297632  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1026 15:17:03.297653  900582 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:17:03.297710  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:17:03.297780  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.686955396s)
	I1026 15:17:03.297822  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:05.933063  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.635211188s)
	I1026 15:17:05.933155  900582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:05.933292  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.635566754s)
	I1026 15:17:05.933309  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1026 15:17:05.933329  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:17:05.933363  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1026 15:17:05.203565  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:07.702117  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:07.590231  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.656842658s)
	I1026 15:17:07.590261  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1026 15:17:07.590281  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:17:07.590332  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:17:07.590410  900582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.657239463s)
	I1026 15:17:07.590438  900582 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 15:17:07.590506  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:09.738674  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.148296569s)
	I1026 15:17:09.738700  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1026 15:17:09.738720  900582 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:17:09.738780  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:17:09.738863  900582 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.148340434s)
	I1026 15:17:09.738879  900582 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1026 15:17:09.738894  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1026 15:17:11.763008  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.024205865s)
	I1026 15:17:11.763032  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1026 15:17:11.763054  900582 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:17:11.763106  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1026 15:17:09.703184  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:12.199799  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:16.196818  900582 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.433691989s)
	I1026 15:17:16.196842  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1026 15:17:16.196859  900582 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:16.196908  900582 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:17:16.795928  900582 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 15:17:16.795972  900582 cache_images.go:124] Successfully loaded all cached images
	I1026 15:17:16.795978  900582 cache_images.go:93] duration metric: took 18.184513306s to LoadCachedImages
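
Images are loaded one tarball at a time via podman load; the per-image completions in the log (kube-scheduler 2.74s, coredns 2.64s, kube-controller-manager 1.66s, kube-proxy 2.15s, kube-apiserver 2.02s, the 98 MB etcd tarball 4.43s) account for most of the 18.18s total reported above. The fallback, collapsed into a loop (staging directory from the log; assumes it holds only image tarballs):

    for tar in /var/lib/minikube/images/*; do
      sudo podman load -i "$tar"
    done
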
	I1026 15:17:16.795988  900582 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:17:16.796076  900582 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-954807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:17:16.796165  900582 ssh_runner.go:195] Run: crio config
	I1026 15:17:16.850783  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:17:16.850807  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:17:16.850827  900582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:17:16.850850  900582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-954807 NodeName:no-preload-954807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:17:16.850975  900582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-954807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
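
The generated kubeadm.yaml above stitches four documents together: InitConfiguration (node registration and advertise address), ClusterConfiguration (apiserver/controller-manager/scheduler extra args), KubeletConfiguration (cgroupfs driver, CRI-O endpoint, eviction disabled), and KubeProxyConfiguration. Once the file has been copied into place (see the kubeadm.yaml.new lines further down), it can be exercised without touching the node (binary and config paths from the log; --dry-run is standard kubeadm):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
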
	
	I1026 15:17:16.851056  900582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:17:16.859315  900582 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1026 15:17:16.859383  900582 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1026 15:17:16.867384  900582 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1026 15:17:16.867488  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1026 15:17:16.868233  900582 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubeadm
	I1026 15:17:16.868829  900582 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubelet
	I1026 15:17:16.872299  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1026 15:17:16.872336  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	W1026 15:17:14.200034  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:16.200500  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:18.204577  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:17.605716  900582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:17:17.624012  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1026 15:17:17.630599  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1026 15:17:17.630648  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1026 15:17:17.787599  900582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1026 15:17:17.797221  900582 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1026 15:17:17.797331  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
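
Each binary is fetched from dl.k8s.io together with a detached .sha256 checksum (the download.go lines above) and then pushed to /var/lib/minikube/binaries/v1.34.1 via the same copy-if-absent stat/scp pattern. A hand-rolled version of the verified download (URLs from the log; the sed step formats the bare hash into the "hash  filename" form sha256sum -c expects):

    v=v1.34.1; arch=arm64
    curl -fLO "https://dl.k8s.io/release/$v/bin/linux/$arch/kubeadm"
    curl -fL "https://dl.k8s.io/release/$v/bin/linux/$arch/kubeadm.sha256" \
      | sed 's|$|  kubeadm|' | sha256sum -c -
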
	I1026 15:17:18.419024  900582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:17:18.429535  900582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:17:18.446625  900582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:17:18.463030  900582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:17:18.480843  900582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:17:18.484916  900582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:17:18.497841  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:17:18.639807  900582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:17:18.658934  900582 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807 for IP: 192.168.85.2
	I1026 15:17:18.658952  900582 certs.go:195] generating shared ca certs ...
	I1026 15:17:18.658967  900582 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.659117  900582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:17:18.659159  900582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:17:18.659166  900582 certs.go:257] generating profile certs ...
	I1026 15:17:18.659220  900582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key
	I1026 15:17:18.659231  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt with IP's: []
	I1026 15:17:18.787373  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt ...
	I1026 15:17:18.787408  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: {Name:mk0b38f2ef642839cf190c25059aef2af5815488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.787616  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key ...
	I1026 15:17:18.787630  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key: {Name:mkb481055104e6ab4a7fbf16d12dbffb1867f6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.787725  900582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805
	I1026 15:17:18.787741  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 15:17:18.877353  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 ...
	I1026 15:17:18.877382  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805: {Name:mkbba21210481a13581e84a46724d1c441fc5aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.877571  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805 ...
	I1026 15:17:18.877587  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805: {Name:mkee85646333967e3b12a6a150c6c8c1ddb64068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:18.877675  900582 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt.274c6805 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt
	I1026 15:17:18.877752  900582 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key
	I1026 15:17:18.877813  900582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key
	I1026 15:17:18.877833  900582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt with IP's: []
	I1026 15:17:19.762654  900582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt ...
	I1026 15:17:19.762686  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt: {Name:mk940ee61c8e99ed61b9fa1cf3c2c6fbccc50b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:19.762890  900582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key ...
	I1026 15:17:19.762908  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key: {Name:mkde321844ba0beb806299e05ddc39a807993c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
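
Three profile certs are minted above against the shared minikube CA: a client cert for "minikube-user", the apiserver serving cert (note its SAN set: the service VIPs 10.96.0.1 and 10.0.0.1, loopback, and the node IP 192.168.85.2), and the front-proxy client cert for the "aggregator". The SANs can be confirmed after the fact with stock openssl (cert path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
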
	I1026 15:17:19.763105  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:17:19.763154  900582 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:17:19.763168  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:17:19.763195  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:17:19.763222  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:17:19.763284  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:17:19.763336  900582 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:17:19.763902  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:17:19.783710  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:17:19.803891  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:17:19.830542  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:17:19.848579  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:17:19.866933  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:17:19.886877  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:17:19.904927  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:17:19.922935  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:17:19.942261  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:17:19.963432  900582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:17:19.981430  900582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:17:19.995431  900582 ssh_runner.go:195] Run: openssl version
	I1026 15:17:20.005405  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:17:20.021179  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.026815  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.026944  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:17:20.068945  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:17:20.078224  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:17:20.087003  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.091268  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.091391  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:17:20.135665  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:17:20.144298  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:17:20.154868  900582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.159150  900582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.159227  900582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:17:20.201721  900582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
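
The openssl x509 -hash calls above compute each CA's subject hash, and the resulting symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) are what lets OpenSSL resolve issuers by hash lookup in /etc/ssl/certs. The trick in two lines, using the minikube CA from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
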
	I1026 15:17:20.210788  900582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:17:20.214548  900582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:17:20.214656  900582 kubeadm.go:400] StartCluster: {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:17:20.214746  900582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:17:20.214824  900582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:17:20.253168  900582 cri.go:89] found id: ""
	I1026 15:17:20.253321  900582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:17:20.263879  900582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:17:20.272470  900582 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:17:20.272588  900582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:17:20.290161  900582 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:17:20.290234  900582 kubeadm.go:157] found existing configuration files:
	
	I1026 15:17:20.290321  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:17:20.300167  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:17:20.300286  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:17:20.308139  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:17:20.317146  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:17:20.317232  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:17:20.324629  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:17:20.332511  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:17:20.332600  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:17:20.340417  900582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:17:20.348419  900582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:17:20.348500  900582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:17:20.356538  900582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:17:20.428511  900582 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:17:20.428831  900582 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:17:20.502987  900582 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 15:17:20.701598  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:23.202017  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:25.704334  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:28.202734  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:30.701412  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:33.199901  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:37.351970  900582 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:17:37.352038  900582 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:17:37.352134  900582 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:17:37.352197  900582 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 15:17:37.352237  900582 kubeadm.go:318] OS: Linux
	I1026 15:17:37.352288  900582 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:17:37.352341  900582 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 15:17:37.352394  900582 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:17:37.352447  900582 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:17:37.352502  900582 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:17:37.352557  900582 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:17:37.352607  900582 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:17:37.352662  900582 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:17:37.352747  900582 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 15:17:37.352833  900582 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:17:37.352935  900582 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:17:37.353032  900582 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:17:37.353101  900582 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:17:37.357115  900582 out.go:252]   - Generating certificates and keys ...
	I1026 15:17:37.357213  900582 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:17:37.357286  900582 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:17:37.357364  900582 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:17:37.357427  900582 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:17:37.357493  900582 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:17:37.357549  900582 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:17:37.357609  900582 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:17:37.357740  900582 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-954807] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:17:37.357799  900582 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:17:37.357926  900582 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-954807] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:17:37.357997  900582 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:17:37.358072  900582 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:17:37.358123  900582 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:17:37.358185  900582 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:17:37.358242  900582 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:17:37.358306  900582 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:17:37.358365  900582 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:17:37.358435  900582 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:17:37.358496  900582 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:17:37.358584  900582 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:17:37.358656  900582 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:17:37.361556  900582 out.go:252]   - Booting up control plane ...
	I1026 15:17:37.361674  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:17:37.361774  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:17:37.361849  900582 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:17:37.362008  900582 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:17:37.362123  900582 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:17:37.362242  900582 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:17:37.362337  900582 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:17:37.362398  900582 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:17:37.362567  900582 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:17:37.362715  900582 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:17:37.362797  900582 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001349413s
	I1026 15:17:37.362910  900582 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:17:37.363005  900582 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 15:17:37.363113  900582 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:17:37.363234  900582 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:17:37.363348  900582 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.070299815s
	I1026 15:17:37.363461  900582 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.410038844s
	I1026 15:17:37.363549  900582 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001520033s
	I1026 15:17:37.363665  900582 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:17:37.363799  900582 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:17:37.363873  900582 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:17:37.364070  900582 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-954807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:17:37.364132  900582 kubeadm.go:318] [bootstrap-token] Using token: 7jyxgn.utj0vxklu33lbfpx
	I1026 15:17:37.367300  900582 out.go:252]   - Configuring RBAC rules ...
	I1026 15:17:37.367434  900582 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:17:37.367535  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:17:37.367709  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:17:37.367873  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:17:37.368036  900582 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:17:37.368134  900582 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:17:37.368266  900582 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:17:37.368333  900582 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:17:37.368386  900582 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:17:37.368391  900582 kubeadm.go:318] 
	I1026 15:17:37.368454  900582 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:17:37.368459  900582 kubeadm.go:318] 
	I1026 15:17:37.368539  900582 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:17:37.368544  900582 kubeadm.go:318] 
	I1026 15:17:37.368577  900582 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:17:37.368640  900582 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:17:37.368829  900582 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:17:37.368837  900582 kubeadm.go:318] 
	I1026 15:17:37.368895  900582 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:17:37.368906  900582 kubeadm.go:318] 
	I1026 15:17:37.368956  900582 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:17:37.368964  900582 kubeadm.go:318] 
	I1026 15:17:37.369019  900582 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:17:37.369113  900582 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:17:37.369189  900582 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:17:37.369200  900582 kubeadm.go:318] 
	I1026 15:17:37.369290  900582 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:17:37.369374  900582 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:17:37.369382  900582 kubeadm.go:318] 
	I1026 15:17:37.369471  900582 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7jyxgn.utj0vxklu33lbfpx \
	I1026 15:17:37.369582  900582 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:17:37.369606  900582 kubeadm.go:318] 	--control-plane 
	I1026 15:17:37.369613  900582 kubeadm.go:318] 
	I1026 15:17:37.369702  900582 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:17:37.369709  900582 kubeadm.go:318] 
	I1026 15:17:37.369794  900582 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7jyxgn.utj0vxklu33lbfpx \
	I1026 15:17:37.369919  900582 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:17:37.369931  900582 cni.go:84] Creating CNI manager for ""
	I1026 15:17:37.369939  900582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:17:37.373097  900582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:17:37.375959  900582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:17:37.383151  900582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:17:37.383173  900582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:17:37.405640  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1026 15:17:35.200536  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	W1026 15:17:37.200650  898916 pod_ready.go:104] pod "coredns-66bc5c9577-rkx49" is not "Ready", error: <nil>
	I1026 15:17:38.699978  898916 pod_ready.go:94] pod "coredns-66bc5c9577-rkx49" is "Ready"
	I1026 15:17:38.700010  898916 pod_ready.go:86] duration metric: took 40.005153866s for pod "coredns-66bc5c9577-rkx49" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.702861  898916 pod_ready.go:83] waiting for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.707578  898916 pod_ready.go:94] pod "etcd-embed-certs-018497" is "Ready"
	I1026 15:17:38.707607  898916 pod_ready.go:86] duration metric: took 4.719265ms for pod "etcd-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.709958  898916 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.714982  898916 pod_ready.go:94] pod "kube-apiserver-embed-certs-018497" is "Ready"
	I1026 15:17:38.715013  898916 pod_ready.go:86] duration metric: took 5.026041ms for pod "kube-apiserver-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.717464  898916 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:38.898356  898916 pod_ready.go:94] pod "kube-controller-manager-embed-certs-018497" is "Ready"
	I1026 15:17:38.898428  898916 pod_ready.go:86] duration metric: took 180.93365ms for pod "kube-controller-manager-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.098895  898916 pod_ready.go:83] waiting for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.497889  898916 pod_ready.go:94] pod "kube-proxy-n7rjg" is "Ready"
	I1026 15:17:39.497920  898916 pod_ready.go:86] duration metric: took 398.998516ms for pod "kube-proxy-n7rjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:39.698043  898916 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:40.098182  898916 pod_ready.go:94] pod "kube-scheduler-embed-certs-018497" is "Ready"
	I1026 15:17:40.098214  898916 pod_ready.go:86] duration metric: took 400.140485ms for pod "kube-scheduler-embed-certs-018497" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:17:40.098227  898916 pod_ready.go:40] duration metric: took 41.464250896s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:17:40.166688  898916 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:17:40.169970  898916 out.go:179] * Done! kubectl is now configured to use "embed-certs-018497" cluster and "default" namespace by default
	I1026 15:17:37.731385  900582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:17:37.731547  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-954807 minikube.k8s.io/updated_at=2025_10_26T15_17_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-954807 minikube.k8s.io/primary=true
	I1026 15:17:37.731550  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:37.754725  900582 ops.go:34] apiserver oom_adj: -16
	I1026 15:17:37.874891  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:38.375625  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:38.875641  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:39.374971  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:39.875871  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:40.375920  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:40.875047  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:41.375959  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:41.875504  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:42.374997  900582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:42.570941  900582 kubeadm.go:1113] duration metric: took 4.839466199s to wait for elevateKubeSystemPrivileges
	I1026 15:17:42.570967  900582 kubeadm.go:402] duration metric: took 22.356317103s to StartCluster
	I1026 15:17:42.570984  900582 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:42.571057  900582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:17:42.573129  900582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:42.573521  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:17:42.573957  900582 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:42.574065  900582 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:17:42.574142  900582 addons.go:69] Setting storage-provisioner=true in profile "no-preload-954807"
	I1026 15:17:42.574156  900582 addons.go:238] Setting addon storage-provisioner=true in "no-preload-954807"
	I1026 15:17:42.574177  900582 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:17:42.574796  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.574981  900582 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:17:42.575544  900582 addons.go:69] Setting default-storageclass=true in profile "no-preload-954807"
	I1026 15:17:42.575575  900582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-954807"
	I1026 15:17:42.575898  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.579179  900582 out.go:179] * Verifying Kubernetes components...
	I1026 15:17:42.582243  900582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:17:42.625626  900582 addons.go:238] Setting addon default-storageclass=true in "no-preload-954807"
	I1026 15:17:42.625674  900582 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:17:42.626137  900582 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:17:42.626332  900582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:17:42.629931  900582 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:17:42.629956  900582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:17:42.630021  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:17:42.660199  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:17:42.676836  900582 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:17:42.676859  900582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:17:42.676926  900582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:17:42.701968  900582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:17:42.914222  900582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:17:42.922664  900582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:17:43.000465  900582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:17:43.066049  900582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:17:43.843646  900582 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 15:17:43.846535  900582 node_ready.go:35] waiting up to 6m0s for node "no-preload-954807" to be "Ready" ...
	I1026 15:17:44.357935  900582 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-954807" context rescaled to 1 replicas
	I1026 15:17:44.375933  900582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309838056s)
	I1026 15:17:44.379662  900582 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:17:44.382673  900582 addons.go:514] duration metric: took 1.808586284s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1026 15:17:45.850300  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
	W1026 15:17:48.350068  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
	W1026 15:17:50.849595  900582 node_ready.go:57] node "no-preload-954807" has "Ready":"False" status (will retry)
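Note: the StartCluster transcript above shows minikube's first-start pattern: stat the serving cert, grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, remove whatever fails the probe (exit status 2 here simply means the file does not exist yet), then run kubeadm init with a fixed --ignore-preflight-errors list. A minimal Go sketch of that probe, using the same grep-over-sudo convention (hypothetical helper, not minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // staleKubeconfig reports whether path should be removed before
    // "kubeadm init": grep exits non-zero when the file is missing or
    // no longer mentions the expected endpoint.
    func staleKubeconfig(endpoint, path string) (bool, error) {
    	err := exec.Command("sudo", "grep", endpoint, path).Run()
    	if err == nil {
    		return false, nil // endpoint present, keep the file
    	}
    	if _, ok := err.(*exec.ExitError); ok {
    		return true, nil // missing file or stale endpoint: remove it
    	}
    	return false, fmt.Errorf("probing %s: %w", path, err) // grep itself failed to run
    }

    func main() {
    	stale, err := staleKubeconfig("https://control-plane.minikube.internal:8443",
    		"/etc/kubernetes/admin.conf")
    	fmt.Println(stale, err)
    }

Treating "file missing" and "endpoint not found" the same way is what lets one cleanup path cover both first starts and restarts, as the four grep/rm pairs in the log do.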
	
	
	==> CRI-O <==
	Oct 26 15:17:24 embed-certs-018497 crio[653]: time="2025-10-26T15:17:24.454875358Z" level=info msg="Removed container e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4/dashboard-metrics-scraper" id=f633f597-755a-47c1-b2ab-2bf22b92600d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:17:27 embed-certs-018497 conmon[1141]: conmon 2c5a5ec5efcaa7b4cb46 <ninfo>: container 1151 exited with status 1
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.440070865Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=510e32a0-7229-478f-bd13-b77991e03f73 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.443885135Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d7bed5a0-0f8b-4943-89b5-66b9ec157ce9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.44942781Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec801b90-23f8-42c9-b6b0-1e2d7b910641 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.449717979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.465089562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.466615832Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b64e7b5d3845c7b61c8dbabd1e610ed457588898891e98da3b3f13e5738de5e9/merged/etc/passwd: no such file or directory"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.466774906Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b64e7b5d3845c7b61c8dbabd1e610ed457588898891e98da3b3f13e5738de5e9/merged/etc/group: no such file or directory"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.467135977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.48644394Z" level=info msg="Created container fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109: kube-system/storage-provisioner/storage-provisioner" id=ec801b90-23f8-42c9-b6b0-1e2d7b910641 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.490457785Z" level=info msg="Starting container: fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109" id=421a93fc-09b6-4872-ad42-2ae72d4cd389 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:17:27 embed-certs-018497 crio[653]: time="2025-10-26T15:17:27.501012445Z" level=info msg="Started container" PID=1651 containerID=fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109 description=kube-system/storage-provisioner/storage-provisioner id=421a93fc-09b6-4872-ad42-2ae72d4cd389 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9da72ea92b352b5fc9be1a5d901935711b56424d52758a82a4c06cc753e65c88
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.16544206Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.173772416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.173958822Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.174047291Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.17775038Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.177907116Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.17798971Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.18335365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.183522112Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.183595491Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.187643002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:17:37 embed-certs-018497 crio[653]: time="2025-10-26T15:17:37.187798212Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fc411acb1c8fd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   9da72ea92b352       storage-provisioner                          kube-system
	ce2dbffab4910       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   d5aafdbc03343       dashboard-metrics-scraper-6ffb444bf9-m58x4   kubernetes-dashboard
	65acd0d0bd415       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   561c23effa4ff       kubernetes-dashboard-855c9754f9-85vnc        kubernetes-dashboard
	e43d91bb5e3e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   69ce496f9d9e2       coredns-66bc5c9577-rkx49                     kube-system
	8cffe56f508af       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   386d85863d10d       busybox                                      default
	2c5a5ec5efcaa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   9da72ea92b352       storage-provisioner                          kube-system
	03db0d606c127       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   3fb084e4674be       kube-proxy-n7rjg                             kube-system
	a544d2cd71d6e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   f7d8425f9507f       kindnet-gxpz7                                kube-system
	090aba612ed4b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5c9de3eec7252       kube-scheduler-embed-certs-018497            kube-system
	409f07111dd90       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   121bb82f81152       kube-apiserver-embed-certs-018497            kube-system
	3bd8efc1a4f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   c4a42bc4acf8e       kube-controller-manager-embed-certs-018497   kube-system
	d9c73ce88247b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d0b1b63d1d5b3       etcd-embed-certs-018497                      kube-system
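Note: the two Exited rows line up with the CRI-O log above: storage-provisioner attempt 1 (2c5a5ec5efcaa) exited with status 1 at 15:17:27 and was immediately replaced by attempt 2 (fc411acb1c8fd), while dashboard-metrics-scraper attempt 2 (ce2dbffab4910) has also exited and the RemoveContainer line at 15:17:24 cleaned up an earlier instance of it.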
	
	
	==> coredns [e43d91bb5e3e6317a58891cd2e1ffa985b52cdbecb3fc66c4cb6d88beed6bb9a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43109 - 59388 "HINFO IN 6632748502928444588.6955070418265700300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014342739s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
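Note: the dial failures above are CoreDNS's kubernetes plugin trying to list Services, Namespaces, and EndpointSlices through the Service VIP 10.96.0.1:443 before the restarted node's proxy rules were in place; the ready plugin keeps reporting 'Still waiting on: "kubernetes"' until those listers sync. A hypothetical in-pod probe that reproduces the failure mode:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the kubernetes Service VIP the way client-go ultimately
    	// does; before the dataplane programs the VIP this times out,
    	// matching the "dial tcp 10.96.0.1:443: i/o timeout" entries.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("apiserver VIP unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver VIP reachable")
    }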
	
	
	==> describe nodes <==
	Name:               embed-certs-018497
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-018497
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-018497
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_15_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-018497
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:15:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:17:26 +0000   Sun, 26 Oct 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-018497
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                072f2fa1-40d7-443d-9b77-e971842fc752
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-rkx49                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-embed-certs-018497                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-gxpz7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-embed-certs-018497             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-018497    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-n7rjg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-embed-certs-018497             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m58x4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-85vnc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m27s              kube-proxy       
	  Normal   Starting                 59s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m34s              kubelet          Node embed-certs-018497 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m34s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s              kubelet          Node embed-certs-018497 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s              kubelet          Node embed-certs-018497 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m34s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m30s              node-controller  Node embed-certs-018497 event: Registered Node embed-certs-018497 in Controller
	  Normal   NodeReady                107s               kubelet          Node embed-certs-018497 status is now: NodeReady
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-018497 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-018497 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node embed-certs-018497 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-018497 event: Registered Node embed-certs-018497 in Controller
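Note: the percentages in the Allocated resources table are computed against the node's Allocatable figures above it: 850m CPU requested / 2000m allocatable = 42.5%, shown truncated as 42%, and 220Mi of memory = 225280Ki / 8022300Ki ≈ 2.8%, shown as 2%.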
	
	
	==> dmesg <==
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d9c73ce88247ba1adf8bd4c1adb21fbde20fbb8f116f5668140518ad1d06a540] <==
	{"level":"warn","ts":"2025-10-26T15:16:53.779787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.792668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.816754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.838788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.855289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.873308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.890824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.902851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.919264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.941133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.957573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.971169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:53.993241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.017656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.037652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.061078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.073379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.093451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.108769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.137000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.147061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.184928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.204073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.282133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:16:54.340971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34874","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:17:57 up  5:00,  0 user,  load average: 4.40, 3.61, 3.05
	Linux embed-certs-018497 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a544d2cd71d6e7dbf96a6029fcb84048899600d50410fd953e7e9825ae6d54e4] <==
	I1026 15:16:56.945972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:16:56.946184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:16:56.946315       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:16:56.946328       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:16:56.946341       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:16:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:16:57.161206       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:16:57.161300       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:16:57.161357       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:16:57.162206       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:17:27.162458       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:17:27.162565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:17:27.162648       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:17:27.162724       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:17:28.661658       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:17:28.661774       1 metrics.go:72] Registering metrics
	I1026 15:17:28.661873       1 controller.go:711] "Syncing nftables rules"
	I1026 15:17:37.164827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:17:37.164873       1 main.go:301] handling current node
	I1026 15:17:47.167831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:17:47.167866       1 main.go:301] handling current node
	I1026 15:17:57.168912       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:17:57.169237       1 main.go:301] handling current node
	
	
	==> kube-apiserver [409f07111dd907cacc317d458d0d45621bc1a541c5c465d80bca7519c1adbc45] <==
	I1026 15:16:55.982319       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:16:55.982561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:16:55.982629       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:16:55.982637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:16:55.982991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:16:55.985542       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:16:55.985555       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:16:55.985560       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:16:55.985566       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:16:55.997594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:16:56.022901       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:16:56.022955       1 policy_source.go:240] refreshing policies
	I1026 15:16:56.028643       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1026 15:16:56.099297       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:16:56.158799       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:16:56.463750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:16:58.012168       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:16:58.175487       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:16:58.319245       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:16:58.407570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:16:58.546910       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.38.171"}
	I1026 15:16:58.570125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.237.136"}
	I1026 15:17:00.751163       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:17:00.784230       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:17:01.035693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3bd8efc1a4f432d7fc33248f86d12e98374d3b114c1ff55bf1e4ebba272ddcd3] <==
	I1026 15:17:00.665230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:17:00.669583       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:17:00.670965       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:17:00.673181       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:17:00.673535       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:17:00.673932       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:17:00.673965       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:17:00.680271       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:17:00.680618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:17:00.684956       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:17:00.686684       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:17:00.704093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:17:00.709767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:17:00.720092       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:17:00.720232       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:17:00.720746       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:17:00.720809       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:17:00.721500       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-018497"
	I1026 15:17:00.721605       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:17:00.726044       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:17:00.726102       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:17:00.741991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:00.800595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:00.800642       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:17:00.800651       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [03db0d606c127fce8efea05cc20d5e89e56ed82af785cf24f1a16c72af21e767] <==
	I1026 15:16:57.918562       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:16:58.283975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:16:58.392772       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:16:58.392909       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:16:58.400896       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:16:58.628331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:16:58.628471       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:16:58.634688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:16:58.635064       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:16:58.636727       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:16:58.638561       1 config.go:200] "Starting service config controller"
	I1026 15:16:58.638646       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:16:58.638691       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:16:58.638718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:16:58.638754       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:16:58.638781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:16:58.639495       1 config.go:309] "Starting node config controller"
	I1026 15:16:58.641437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:16:58.641467       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:16:58.739546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:16:58.739549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:16:58.739566       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [090aba612ed4b432cc3651a2a65ff1462aa79aa555f252a9e907d3503d8585ef] <==
	I1026 15:16:53.633638       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:16:57.672079       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:16:57.672178       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:16:57.697401       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:16:57.697545       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:16:57.697570       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:16:57.697613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:16:57.728090       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.742997       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.742636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.743045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.851619       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:16:57.851680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:16:57.897647       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373072     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8ff\" (UniqueName: \"kubernetes.io/projected/f5ef036c-7b62-4cca-a13d-a421490f29ac-kube-api-access-dg8ff\") pod \"dashboard-metrics-scraper-6ffb444bf9-m58x4\" (UID: \"f5ef036c-7b62-4cca-a13d-a421490f29ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373741     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a3da6ff-3ac6-4c07-bf84-71014b0de0c8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-85vnc\" (UID: \"2a3da6ff-3ac6-4c07-bf84-71014b0de0c8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.373924     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpk2\" (UniqueName: \"kubernetes.io/projected/2a3da6ff-3ac6-4c07-bf84-71014b0de0c8-kube-api-access-6xpk2\") pod \"kubernetes-dashboard-855c9754f9-85vnc\" (UID: \"2a3da6ff-3ac6-4c07-bf84-71014b0de0c8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc"
	Oct 26 15:17:01 embed-certs-018497 kubelet[779]: I1026 15:17:01.374056     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5ef036c-7b62-4cca-a13d-a421490f29ac-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-m58x4\" (UID: \"f5ef036c-7b62-4cca-a13d-a421490f29ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4"
	Oct 26 15:17:08 embed-certs-018497 kubelet[779]: I1026 15:17:08.377225     779 scope.go:117] "RemoveContainer" containerID="815e3e02c237486d3f53689ba03841be3cb7a070dbb17980dacf98e511f267d1"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: I1026 15:17:09.387911     779 scope.go:117] "RemoveContainer" containerID="815e3e02c237486d3f53689ba03841be3cb7a070dbb17980dacf98e511f267d1"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: I1026 15:17:09.388315     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:09 embed-certs-018497 kubelet[779]: E1026 15:17:09.390987     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:10 embed-certs-018497 kubelet[779]: I1026 15:17:10.391914     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:10 embed-certs-018497 kubelet[779]: E1026 15:17:10.392091     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:11 embed-certs-018497 kubelet[779]: I1026 15:17:11.547317     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:11 embed-certs-018497 kubelet[779]: E1026 15:17:11.547496     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:23 embed-certs-018497 kubelet[779]: I1026 15:17:23.962762     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.429142     779 scope.go:117] "RemoveContainer" containerID="e8f28eba26cba32c65ed1060118c77a9fa7da416fb426238bf850cf05a673d91"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.429785     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: E1026 15:17:24.430058     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:24 embed-certs-018497 kubelet[779]: I1026 15:17:24.461547     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-85vnc" podStartSLOduration=10.316163864 podStartE2EDuration="23.461529159s" podCreationTimestamp="2025-10-26 15:17:01 +0000 UTC" firstStartedPulling="2025-10-26 15:17:01.664044877 +0000 UTC m=+14.870179454" lastFinishedPulling="2025-10-26 15:17:14.80941019 +0000 UTC m=+28.015544749" observedRunningTime="2025-10-26 15:17:15.433654574 +0000 UTC m=+28.639789142" watchObservedRunningTime="2025-10-26 15:17:24.461529159 +0000 UTC m=+37.667663719"
	Oct 26 15:17:27 embed-certs-018497 kubelet[779]: I1026 15:17:27.438771     779 scope.go:117] "RemoveContainer" containerID="2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1"
	Oct 26 15:17:31 embed-certs-018497 kubelet[779]: I1026 15:17:31.547045     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:31 embed-certs-018497 kubelet[779]: E1026 15:17:31.547242     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:42 embed-certs-018497 kubelet[779]: I1026 15:17:42.963398     779 scope.go:117] "RemoveContainer" containerID="ce2dbffab4910e828e51fdfdfd6f5533cd303433fbaeb1a950333fce0d2ba7df"
	Oct 26 15:17:42 embed-certs-018497 kubelet[779]: E1026 15:17:42.963620     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m58x4_kubernetes-dashboard(f5ef036c-7b62-4cca-a13d-a421490f29ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m58x4" podUID="f5ef036c-7b62-4cca-a13d-a421490f29ac"
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:17:52 embed-certs-018497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [65acd0d0bd4152422d5b3b852f04705e7b5bc36efce35381af401cfd45e8efe0] <==
	2025/10/26 15:17:14 Using namespace: kubernetes-dashboard
	2025/10/26 15:17:14 Using in-cluster config to connect to apiserver
	2025/10/26 15:17:14 Using secret token for csrf signing
	2025/10/26 15:17:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:17:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:17:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:17:14 Generating JWE encryption key
	2025/10/26 15:17:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:17:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:17:15 Initializing JWE encryption key from synchronized object
	2025/10/26 15:17:15 Creating in-cluster Sidecar client
	2025/10/26 15:17:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:17:15 Serving insecurely on HTTP port: 9090
	2025/10/26 15:17:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:17:14 Starting overwatch
	
	
	==> storage-provisioner [2c5a5ec5efcaa7b4cb46652fe1ea6fe32cdbf87447453fd57b92c3b7356d86d1] <==
	I1026 15:16:57.175332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:17:27.180899       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fc411acb1c8fded25338c122082b2fbbe3225e28f8198356f3a9c4ac9f758109] <==
	W1026 15:17:27.534824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:30.990981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:35.251804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:38.850629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:41.903754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.927149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.932951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:44.933230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:17:44.933508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4!
	I1026 15:17:44.934266       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16612a96-da08-4714-84ae-ba8e387bd6f2", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4 became leader
	W1026 15:17:44.939176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:44.968410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:45.033888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-018497_8b7ba958-996a-47c0-891b-14dd7e17eca4!
	W1026 15:17:46.972390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:46.977926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:48.981076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:48.985852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:50.988483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:50.995095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:52.998765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:53.012925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:55.017678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:55.025606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:57.028461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:57.042901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-018497 -n embed-certs-018497
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-018497 -n embed-certs-018497: exit status 2 (388.778469ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-018497 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.90s)
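
Editor's note on the repeated storage-provisioner warnings above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"): the provisioner's leader election still operates on a v1 Endpoints object (the LeaderElection event above references Kind:"Endpoints"), so every poll trips the client-go deprecation handler. A minimal client-go sketch of the replacement the warning suggests, listing discovery.k8s.io/v1 EndpointSlices instead; the in-cluster config and the kube-system namespace are illustrative assumptions, and this is not the storage-provisioner's actual code:

	// endpointslice_sketch.go — lists EndpointSlices, the suggested
	// replacement for the deprecated v1 Endpoints API.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumption: running inside the cluster, like the provisioner pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		// discovery.k8s.io/v1 EndpointSlice replaces v1 Endpoints, so this
		// call does not emit the deprecation warning seen in the log.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}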

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (358.167955ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:18:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-954807 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-954807 describe deploy/metrics-server -n kube-system: exit status 1 (127.166704ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-954807 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
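
Editor's note: the MK_ADDON_ENABLE_PAUSED failure above comes from minikube probing for paused containers by shelling out to `sudo runc list -f json`, which exits non-zero because runc's default state directory /run/runc does not exist on this crio node (crio may keep its runc state under a different root, e.g. /run/crio/runc — that path is an assumption, not confirmed by the log). A minimal Go sketch of that probe, for reproducing the error outside the test harness; it is not minikube's actual implementation:

	// paused_check_sketch.go — runs the same command the error message shows
	// and looks for containers in the "paused" state. Assumes runc's default
	// state root; on a crio node runc may need an explicit --root flag
	// (the exact path is an assumption).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields of `runc list -f json` output that
	// the check cares about.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // the paused check looks for "paused"
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// Reproduces "runc: sudo runc list -f json: Process exited with status 1"
			fmt.Println("list paused:", err)
			return
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, c := range containers {
			if c.Status == "paused" {
				fmt.Println("paused:", c.ID)
			}
		}
	}

On a node where the runtime's state directory exists, the same probe lists the running containers and the addon-enable path proceeds past the paused check.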
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-954807
helpers_test.go:243: (dbg) docker inspect no-preload-954807:

-- stdout --
	[
	    {
	        "Id": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	        "Created": "2025-10-26T15:16:49.517959935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 901147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:16:49.752926894Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hostname",
	        "HostsPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hosts",
	        "LogPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786-json.log",
	        "Name": "/no-preload-954807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-954807:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-954807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	                "LowerDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-954807",
	                "Source": "/var/lib/docker/volumes/no-preload-954807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-954807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-954807",
	                "name.minikube.sigs.k8s.io": "no-preload-954807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c88f58bd5e4e48fe94ba3274beea249bb9ae6deddb1899dfd0a3830afa1a52a4",
	            "SandboxKey": "/var/run/docker/netns/c88f58bd5e4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-954807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:ac:65:55:e9:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85855106e1f3577e90f02f145412c517c0b5aba224f5d8005b2109486b8acb25",
	                    "EndpointID": "8f69cb89f414f4a0fc3df6cf4ff7a74f09e19e91de42ba6daa75f3e0024e84d0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-954807",
	                        "974a34e5ba04"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-954807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-954807 logs -n 25: (1.412607746s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-209492                                                                                                                                                                                                                        │ cert-options-209492          │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-304880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p old-k8s-version-304880 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:14 UTC │
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                                                                                               │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                                                                                     │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                                                                                               │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:18:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:18:02.081922  906105 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:18:02.082187  906105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:02.082222  906105 out.go:374] Setting ErrFile to fd 2...
	I1026 15:18:02.082242  906105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:02.082578  906105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:18:02.083052  906105 out.go:368] Setting JSON to false
	I1026 15:18:02.084046  906105 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18034,"bootTime":1761473848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:18:02.084154  906105 start.go:141] virtualization:  
	I1026 15:18:02.088154  906105 out.go:179] * [default-k8s-diff-port-494684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:18:02.092410  906105 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:18:02.092559  906105 notify.go:220] Checking for updates...
	I1026 15:18:02.098754  906105 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:18:02.101920  906105 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:02.104925  906105 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:18:02.107952  906105 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:18:02.110963  906105 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:18:02.114542  906105 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:02.114654  906105 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:18:02.144831  906105 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:18:02.144964  906105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:02.205559  906105 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:18:02.196057406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:02.205685  906105 docker.go:318] overlay module found
	I1026 15:18:02.208900  906105 out.go:179] * Using the docker driver based on user configuration
	I1026 15:18:02.211870  906105 start.go:305] selected driver: docker
	I1026 15:18:02.211898  906105 start.go:925] validating driver "docker" against <nil>
	I1026 15:18:02.211912  906105 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:18:02.212657  906105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:02.269779  906105 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:18:02.260278143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:02.269934  906105 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:18:02.270162  906105 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:02.273256  906105 out.go:179] * Using Docker driver with root privileges
	I1026 15:18:02.276134  906105 cni.go:84] Creating CNI manager for ""
	I1026 15:18:02.276210  906105 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:02.276228  906105 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:18:02.276312  906105 start.go:349] cluster config:
	{Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:02.279557  906105 out.go:179] * Starting "default-k8s-diff-port-494684" primary control-plane node in "default-k8s-diff-port-494684" cluster
	I1026 15:18:02.282481  906105 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:18:02.285558  906105 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:18:02.288461  906105 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:02.288528  906105 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:18:02.288540  906105 cache.go:58] Caching tarball of preloaded images
	I1026 15:18:02.288547  906105 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:18:02.288628  906105 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:18:02.288638  906105 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:18:02.288792  906105 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/config.json ...
	I1026 15:18:02.288817  906105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/config.json: {Name:mk2e164a27e6478ad8aff547579009d612d1813a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:02.308252  906105 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:18:02.308275  906105 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:18:02.308297  906105 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:18:02.308322  906105 start.go:360] acquireMachinesLock for default-k8s-diff-port-494684: {Name:mk0ed1a7373f921811143d09c40dcffb09852703 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:02.308442  906105 start.go:364] duration metric: took 103.706µs to acquireMachinesLock for "default-k8s-diff-port-494684"
	I1026 15:18:02.308471  906105 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:02.308556  906105 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:18:02.312017  906105 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:18:02.312304  906105 start.go:159] libmachine.API.Create for "default-k8s-diff-port-494684" (driver="docker")
	I1026 15:18:02.312433  906105 client.go:168] LocalClient.Create starting
	I1026 15:18:02.312521  906105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:18:02.312559  906105 main.go:141] libmachine: Decoding PEM data...
	I1026 15:18:02.312591  906105 main.go:141] libmachine: Parsing certificate...
	I1026 15:18:02.312654  906105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:18:02.312680  906105 main.go:141] libmachine: Decoding PEM data...
	I1026 15:18:02.312690  906105 main.go:141] libmachine: Parsing certificate...
	I1026 15:18:02.313208  906105 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-494684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:18:02.331136  906105 cli_runner.go:211] docker network inspect default-k8s-diff-port-494684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:18:02.331217  906105 network_create.go:284] running [docker network inspect default-k8s-diff-port-494684] to gather additional debugging logs...
	I1026 15:18:02.331235  906105 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-494684
	W1026 15:18:02.348402  906105 cli_runner.go:211] docker network inspect default-k8s-diff-port-494684 returned with exit code 1
	I1026 15:18:02.348429  906105 network_create.go:287] error running [docker network inspect default-k8s-diff-port-494684]: docker network inspect default-k8s-diff-port-494684: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-494684 not found
	I1026 15:18:02.348448  906105 network_create.go:289] output of [docker network inspect default-k8s-diff-port-494684]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-494684 not found
	
	** /stderr **
	I1026 15:18:02.348570  906105 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:18:02.366906  906105 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:18:02.367263  906105 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:18:02.367643  906105 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:18:02.368077  906105 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd640}
	I1026 15:18:02.368104  906105 network_create.go:124] attempt to create docker network default-k8s-diff-port-494684 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:18:02.368173  906105 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-494684 default-k8s-diff-port-494684
	I1026 15:18:02.429191  906105 network_create.go:108] docker network default-k8s-diff-port-494684 192.168.76.0/24 created
	I1026 15:18:02.429230  906105 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-494684" container
	I1026 15:18:02.429306  906105 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:18:02.446722  906105 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-494684 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-494684 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:18:02.465911  906105 oci.go:103] Successfully created a docker volume default-k8s-diff-port-494684
	I1026 15:18:02.466071  906105 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-494684-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-494684 --entrypoint /usr/bin/test -v default-k8s-diff-port-494684:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:18:03.074056  906105 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-494684
	I1026 15:18:03.074146  906105 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:03.074168  906105 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:18:03.074284  906105 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-494684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
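	Note on the network_create lines above: minikube walks candidate 192.168.x.0/24 blocks and skips any that overlap an address already assigned to a local interface (the br-* bridges), which is why the log shows 49, 58 and 67 skipped before 76 is chosen. A minimal Go sketch of that scan follows; it is an illustration under those assumptions, not minikube's actual implementation, and the step of 9 is simply inferred from the 49 -> 58 -> 67 -> 76 sequence in this log.

	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any local interface address falls inside
	// the candidate CIDR (e.g. a docker bridge gateway like 192.168.49.1).
	func subnetTaken(candidate *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		// Probe 192.168.49.0/24 first, then step the third octet by 9,
		// matching the skip/use sequence visible in the log above.
		for octet := 49; octet <= 255; octet += 9 {
			_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
			if !subnetTaken(cidr) {
				fmt.Println("using free private subnet:", cidr)
				return
			}
			fmt.Println("skipping subnet that is taken:", cidr)
		}
	}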
	
	
	==> CRI-O <==
	Oct 26 15:17:57 no-preload-954807 crio[839]: time="2025-10-26T15:17:57.720942073Z" level=info msg="Created container 28f9f1fd3e59b275d994ea14cb68fc78890708db4b20d0eda7d16a4b0fc2de60: kube-system/storage-provisioner/storage-provisioner" id=fffad302-fd8a-4ed1-8e37-7a59bd1f0e92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:17:57 no-preload-954807 crio[839]: time="2025-10-26T15:17:57.722208335Z" level=info msg="Starting container: 28f9f1fd3e59b275d994ea14cb68fc78890708db4b20d0eda7d16a4b0fc2de60" id=5a54418d-d5e7-4d54-8ce3-bcb2a17018d1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:17:57 no-preload-954807 crio[839]: time="2025-10-26T15:17:57.724131819Z" level=info msg="Started container" PID=2499 containerID=28f9f1fd3e59b275d994ea14cb68fc78890708db4b20d0eda7d16a4b0fc2de60 description=kube-system/storage-provisioner/storage-provisioner id=5a54418d-d5e7-4d54-8ce3-bcb2a17018d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67e0c62f4c1b6cdeccbea1e8b2e479a55c7dda8b93db9bf9c2c7f929cac3fb6a
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.357367888Z" level=info msg="Running pod sandbox: default/busybox/POD" id=197aaa53-63a7-455b-9ff5-67969550c03f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.357454125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.368057048Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e UID:9a8dabc7-7557-4a48-8806-6fd5fee80256 NetNS:/var/run/netns/55e76a5f-09df-46fa-9a52-8ef6648d8c54 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001400900}] Aliases:map[]}"
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.368117783Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.381167699Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e UID:9a8dabc7-7557-4a48-8806-6fd5fee80256 NetNS:/var/run/netns/55e76a5f-09df-46fa-9a52-8ef6648d8c54 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001400900}] Aliases:map[]}"
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.381499961Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.388259741Z" level=info msg="Ran pod sandbox bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e with infra container: default/busybox/POD" id=197aaa53-63a7-455b-9ff5-67969550c03f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.389897699Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=188d9748-945a-4017-b616-5e5bab86140b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.390157435Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=188d9748-945a-4017-b616-5e5bab86140b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.39027724Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=188d9748-945a-4017-b616-5e5bab86140b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.392915103Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82c14ad1-cc75-47c7-b7fa-f9304c8d3359 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:18:01 no-preload-954807 crio[839]: time="2025-10-26T15:18:01.398531255Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.521935815Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=82c14ad1-cc75-47c7-b7fa-f9304c8d3359 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.522952958Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1140359-862d-430f-b944-204add7f344e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.526933187Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cb40934c-8490-4888-8bb8-fdbeb051e76f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.536153374Z" level=info msg="Creating container: default/busybox/busybox" id=c19cb4cc-e5c6-4b5e-8551-970199edffbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.536603611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.545408648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.546099388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.566415944Z" level=info msg="Created container 8a0780eb15c130e22f86084f14879c3e03d5720981253fa8c9e73424a86ede1c: default/busybox/busybox" id=c19cb4cc-e5c6-4b5e-8551-970199edffbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.570785199Z" level=info msg="Starting container: 8a0780eb15c130e22f86084f14879c3e03d5720981253fa8c9e73424a86ede1c" id=80ae413e-c273-40e1-98c8-8f53b4b0cf20 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:18:03 no-preload-954807 crio[839]: time="2025-10-26T15:18:03.575498285Z" level=info msg="Started container" PID=2552 containerID=8a0780eb15c130e22f86084f14879c3e03d5720981253fa8c9e73424a86ede1c description=default/busybox/busybox id=80ae413e-c273-40e1-98c8-8f53b4b0cf20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e
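	The CRI-O entries above trace the standard CRI call sequence for the busybox pod: ImageService/ImageStatus (image not found), ImageService/PullImage, then RuntimeService/CreateContainer and StartContainer. A minimal sketch of the first two calls using the published k8s.io/cri-api types is below; the socket path is CRI-O's usual default and the image name is copied from this run, both of which are assumptions to adjust for other setups.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default client socket (assumed): /var/run/crio/crio.sock
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		ic := runtimeapi.NewImageServiceClient(conn)

		// "Checking image status" in the log corresponds to ImageStatus.
		st, err := ic.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
		if err != nil {
			panic(err)
		}
		if st.Image == nil {
			// "Image ... not found" -> the kubelet issues PullImage next.
			if _, err := ic.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
				panic(err)
			}
		}
		fmt.Println("image present")
	}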
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8a0780eb15c13       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   bae4706c26a2b       busybox                                     default
	28f9f1fd3e59b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   67e0c62f4c1b6       storage-provisioner                         kube-system
	73f5d8729e1dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   f9f41582d2ece       coredns-66bc5c9577-7xjmh                    kube-system
	0ac4281b09084       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   1cc490aa13728       kindnet-9grs2                               kube-system
	561a2363a2cf6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   b9a0b45febff5       kube-proxy-q8nns                            kube-system
	72e9bc2907bc3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      40 seconds ago      Running             kube-scheduler            0                   c20d72fc8c22e       kube-scheduler-no-preload-954807            kube-system
	8cd50f7ae06ef       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      41 seconds ago      Running             etcd                      0                   fce2d7e3490c6       etcd-no-preload-954807                      kube-system
	76beeafb22a59       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      41 seconds ago      Running             kube-apiserver            0                   098d1cbbcced1       kube-apiserver-no-preload-954807            kube-system
	42d8dd83e507e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   347bfaabaeeab       kube-controller-manager-no-preload-954807   kube-system
	
	
	==> coredns [73f5d8729e1dc8311d99bc77e08db6bee72be0fc5b06d5a7a85caccc0f9182cb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44715 - 7544 "HINFO IN 6372644210443231063.3884242673980215742. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007917199s
	
	
	==> describe nodes <==
	Name:               no-preload-954807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-954807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-954807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_17_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-954807
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:18:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:18:07 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:18:07 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:18:07 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:18:07 +0000   Sun, 26 Oct 2025 15:17:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-954807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c4720016-79cb-477b-b38d-c7121463d568
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-7xjmh                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-954807                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-9grs2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-954807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-954807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-q8nns                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-954807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-954807 event: Registered Node no-preload-954807 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-954807 status is now: NodeReady
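	The "Allocated resources" percentages in the describe output above are summed pod requests (or limits) divided by the node's allocatable amounts: 850m of 2 CPUs truncates to 42%, and 220Mi of 8022300Ki truncates to 2%. A short sketch of that arithmetic with the apimachinery Quantity type, using the values copied from this node:

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		cpuRequests := resource.MustParse("850m")   // summed CPU requests
		cpuAlloc := resource.MustParse("2")         // allocatable cpu
		memRequests := resource.MustParse("220Mi")  // summed memory requests
		memAlloc := resource.MustParse("8022300Ki") // allocatable memory

		// Integer division reproduces kubectl's truncated percentages.
		fmt.Printf("cpu    %d%%\n", 100*cpuRequests.MilliValue()/cpuAlloc.MilliValue()) // 42%
		fmt.Printf("memory %d%%\n", 100*memRequests.Value()/memAlloc.Value())           // 2%
	}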
	
	
	==> dmesg <==
	[Oct26 14:54] overlayfs: idmapped layers are currently not supported
	[Oct26 14:55] overlayfs: idmapped layers are currently not supported
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8cd50f7ae06ef63aa93bed4f9a0992540bd75075691f8b7d004ef94c0cc722e6] <==
	{"level":"warn","ts":"2025-10-26T15:17:32.897658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:32.917499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:32.944983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:32.972331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:32.978809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.042124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.061212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.089175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.100606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.117306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.135174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.152931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.170661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.187582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.209100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.227171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.242782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.260080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.281694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.300924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.318027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.345531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.357850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.379560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:17:33.454222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43464","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:18:11 up  5:00,  0 user,  load average: 4.03, 3.56, 3.04
	Linux no-preload-954807 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0ac4281b09084be26501a833af40b73a35992b6084444945650cc475cb6ed8f2] <==
	I1026 15:17:46.526635       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:17:46.527250       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:17:46.527383       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:17:46.527396       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:17:46.527410       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:17:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:17:46.734455       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:17:46.734482       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:17:46.734491       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:17:46.734602       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:17:46.935684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:17:46.935721       1 metrics.go:72] Registering metrics
	I1026 15:17:46.935780       1 controller.go:711] "Syncing nftables rules"
	I1026 15:17:56.740765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:17:56.740829       1 main.go:301] handling current node
	I1026 15:18:06.735408       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:18:06.735448       1 main.go:301] handling current node
	
	
	==> kube-apiserver [76beeafb22a599ba0ef92fd692d20c46e158dee59243490b3b5b0347db6dd2ff] <==
	I1026 15:17:34.361660       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:17:34.361691       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:17:34.361732       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:17:34.373038       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:17:34.401928       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:17:34.402908       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:17:34.416153       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:17:34.416827       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:17:35.023290       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:17:35.029229       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:17:35.029318       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:17:35.810500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:17:35.873132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:17:36.039244       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:17:36.058209       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 15:17:36.061652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:17:36.070119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:17:36.248421       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:17:36.768919       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:17:36.787438       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:17:36.803403       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:17:41.453596       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:17:41.469295       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:17:42.002135       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:17:42.150646       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [42d8dd83e507ed6821ae194aee88e01d33b85f07452ceda4cc3cea3ae43383b7] <==
	I1026 15:17:41.286212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:41.291555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:17:41.291586       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:17:41.291597       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:17:41.292061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:17:41.292264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:17:41.292399       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:17:41.292757       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:17:41.293482       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:17:41.293703       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:17:41.293889       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:17:41.293965       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:17:41.294239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:17:41.294368       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:17:41.295254       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:17:41.295361       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:17:41.297760       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:17:41.297847       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:17:41.297924       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-954807"
	I1026 15:17:41.297969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:17:41.298329       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:17:41.298946       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:17:41.299542       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:17:41.302644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:18:01.300602       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [561a2363a2cf6b1f50861f0d6816962e5cf386f249b04f7caa310f1924b63d26] <==
	I1026 15:17:42.634510       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:17:42.756804       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:17:42.873118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:17:42.873153       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:17:42.873241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:17:42.945497       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:17:42.945553       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:17:42.950941       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:17:42.951255       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:17:42.951269       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:17:42.952493       1 config.go:200] "Starting service config controller"
	I1026 15:17:42.952504       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:17:42.952530       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:17:42.952535       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:17:42.952545       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:17:42.952549       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:17:42.953494       1 config.go:309] "Starting node config controller"
	I1026 15:17:42.953504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:17:42.953511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:17:43.052651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:17:43.052684       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:17:43.052851       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [72e9bc2907bc36ac99e125e6140b64751617087fb53ac15a5a3b61958634476e] <==
	E1026 15:17:34.341761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:17:34.341951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:17:34.342039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:17:34.342127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:17:34.342338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:17:34.352983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:17:34.353009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:17:34.353064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:17:34.353127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:17:34.353180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:17:34.353216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:17:34.353303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:17:34.353593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:17:35.161930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:17:35.172375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:17:35.224904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:17:35.238463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:17:35.268457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 15:17:35.282804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:17:35.307529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:17:35.334743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:17:35.411232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:17:35.419591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:17:35.494811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 15:17:37.625342       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:17:38 no-preload-954807 kubelet[2007]: I1026 15:17:38.062197    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-954807" podStartSLOduration=1.062176528 podStartE2EDuration="1.062176528s" podCreationTimestamp="2025-10-26 15:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:17:38.043108297 +0000 UTC m=+1.430147386" watchObservedRunningTime="2025-10-26 15:17:38.062176528 +0000 UTC m=+1.449215617"
	Oct 26 15:17:41 no-preload-954807 kubelet[2007]: I1026 15:17:41.265764    2007 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:17:41 no-preload-954807 kubelet[2007]: I1026 15:17:41.266488    2007 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.080901    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f407a5bf-332b-4393-8250-e22d40da01f9-lib-modules\") pod \"kube-proxy-q8nns\" (UID: \"f407a5bf-332b-4393-8250-e22d40da01f9\") " pod="kube-system/kube-proxy-q8nns"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.080964    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f115af-1173-42c3-a38d-af5044b515d6-xtables-lock\") pod \"kindnet-9grs2\" (UID: \"24f115af-1173-42c3-a38d-af5044b515d6\") " pod="kube-system/kindnet-9grs2"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.080991    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f407a5bf-332b-4393-8250-e22d40da01f9-xtables-lock\") pod \"kube-proxy-q8nns\" (UID: \"f407a5bf-332b-4393-8250-e22d40da01f9\") " pod="kube-system/kube-proxy-q8nns"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.081008    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f115af-1173-42c3-a38d-af5044b515d6-lib-modules\") pod \"kindnet-9grs2\" (UID: \"24f115af-1173-42c3-a38d-af5044b515d6\") " pod="kube-system/kindnet-9grs2"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.081062    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmmb\" (UniqueName: \"kubernetes.io/projected/24f115af-1173-42c3-a38d-af5044b515d6-kube-api-access-pbmmb\") pod \"kindnet-9grs2\" (UID: \"24f115af-1173-42c3-a38d-af5044b515d6\") " pod="kube-system/kindnet-9grs2"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.081083    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f407a5bf-332b-4393-8250-e22d40da01f9-kube-proxy\") pod \"kube-proxy-q8nns\" (UID: \"f407a5bf-332b-4393-8250-e22d40da01f9\") " pod="kube-system/kube-proxy-q8nns"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.081101    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b6mg\" (UniqueName: \"kubernetes.io/projected/f407a5bf-332b-4393-8250-e22d40da01f9-kube-api-access-5b6mg\") pod \"kube-proxy-q8nns\" (UID: \"f407a5bf-332b-4393-8250-e22d40da01f9\") " pod="kube-system/kube-proxy-q8nns"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.081134    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/24f115af-1173-42c3-a38d-af5044b515d6-cni-cfg\") pod \"kindnet-9grs2\" (UID: \"24f115af-1173-42c3-a38d-af5044b515d6\") " pod="kube-system/kindnet-9grs2"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.200278    2007 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:17:42 no-preload-954807 kubelet[2007]: I1026 15:17:42.929612    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q8nns" podStartSLOduration=0.92958674 podStartE2EDuration="929.58674ms" podCreationTimestamp="2025-10-26 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:17:42.929243762 +0000 UTC m=+6.316282851" watchObservedRunningTime="2025-10-26 15:17:42.92958674 +0000 UTC m=+6.316625821"
	Oct 26 15:17:47 no-preload-954807 kubelet[2007]: I1026 15:17:47.116942    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9grs2" podStartSLOduration=1.22202393 podStartE2EDuration="5.116921582s" podCreationTimestamp="2025-10-26 15:17:42 +0000 UTC" firstStartedPulling="2025-10-26 15:17:42.409297892 +0000 UTC m=+5.796336981" lastFinishedPulling="2025-10-26 15:17:46.304195553 +0000 UTC m=+9.691234633" observedRunningTime="2025-10-26 15:17:46.957026261 +0000 UTC m=+10.344065358" watchObservedRunningTime="2025-10-26 15:17:47.116921582 +0000 UTC m=+10.503960671"
	Oct 26 15:17:56 no-preload-954807 kubelet[2007]: I1026 15:17:56.966602    2007 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: I1026 15:17:57.205804    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqv6\" (UniqueName: \"kubernetes.io/projected/7c8cb8b7-9202-4e22-bc6b-db89e79c7589-kube-api-access-ccqv6\") pod \"coredns-66bc5c9577-7xjmh\" (UID: \"7c8cb8b7-9202-4e22-bc6b-db89e79c7589\") " pod="kube-system/coredns-66bc5c9577-7xjmh"
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: I1026 15:17:57.205878    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jvz\" (UniqueName: \"kubernetes.io/projected/5cb08c14-ee23-4e69-b4b7-e5ef184ed78e-kube-api-access-d6jvz\") pod \"storage-provisioner\" (UID: \"5cb08c14-ee23-4e69-b4b7-e5ef184ed78e\") " pod="kube-system/storage-provisioner"
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: I1026 15:17:57.205932    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c8cb8b7-9202-4e22-bc6b-db89e79c7589-config-volume\") pod \"coredns-66bc5c9577-7xjmh\" (UID: \"7c8cb8b7-9202-4e22-bc6b-db89e79c7589\") " pod="kube-system/coredns-66bc5c9577-7xjmh"
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: I1026 15:17:57.205955    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5cb08c14-ee23-4e69-b4b7-e5ef184ed78e-tmp\") pod \"storage-provisioner\" (UID: \"5cb08c14-ee23-4e69-b4b7-e5ef184ed78e\") " pod="kube-system/storage-provisioner"
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: W1026 15:17:57.389173    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-f9f41582d2ecefceeeb0c684ec8c02a5622f394d52ed7d4dc9806313bcf1eddc WatchSource:0}: Error finding container f9f41582d2ecefceeeb0c684ec8c02a5622f394d52ed7d4dc9806313bcf1eddc: Status 404 returned error can't find the container with id f9f41582d2ecefceeeb0c684ec8c02a5622f394d52ed7d4dc9806313bcf1eddc
	Oct 26 15:17:57 no-preload-954807 kubelet[2007]: W1026 15:17:57.625709    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-67e0c62f4c1b6cdeccbea1e8b2e479a55c7dda8b93db9bf9c2c7f929cac3fb6a WatchSource:0}: Error finding container 67e0c62f4c1b6cdeccbea1e8b2e479a55c7dda8b93db9bf9c2c7f929cac3fb6a: Status 404 returned error can't find the container with id 67e0c62f4c1b6cdeccbea1e8b2e479a55c7dda8b93db9bf9c2c7f929cac3fb6a
	Oct 26 15:17:58 no-preload-954807 kubelet[2007]: I1026 15:17:58.049390    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.049368615 podStartE2EDuration="14.049368615s" podCreationTimestamp="2025-10-26 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:17:58.020516875 +0000 UTC m=+21.407555964" watchObservedRunningTime="2025-10-26 15:17:58.049368615 +0000 UTC m=+21.436407704"
	Oct 26 15:17:59 no-preload-954807 kubelet[2007]: I1026 15:17:59.006352    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7xjmh" podStartSLOduration=17.006324613 podStartE2EDuration="17.006324613s" podCreationTimestamp="2025-10-26 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:17:58.06496169 +0000 UTC m=+21.452000779" watchObservedRunningTime="2025-10-26 15:17:59.006324613 +0000 UTC m=+22.393363702"
	Oct 26 15:18:01 no-preload-954807 kubelet[2007]: I1026 15:18:01.137875    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkmc\" (UniqueName: \"kubernetes.io/projected/9a8dabc7-7557-4a48-8806-6fd5fee80256-kube-api-access-cxkmc\") pod \"busybox\" (UID: \"9a8dabc7-7557-4a48-8806-6fd5fee80256\") " pod="default/busybox"
	Oct 26 15:18:01 no-preload-954807 kubelet[2007]: W1026 15:18:01.387148    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e WatchSource:0}: Error finding container bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e: Status 404 returned error can't find the container with id bae4706c26a2b5ab501e5134c6a229e2adffd170aac05d76db5d78c5b443843e
	
	
	==> storage-provisioner [28f9f1fd3e59b275d994ea14cb68fc78890708db4b20d0eda7d16a4b0fc2de60] <==
	I1026 15:17:57.758475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:17:57.776205       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:17:57.776350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:17:57.779356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:57.788431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:57.788684       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:17:57.789017       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-954807_9f754d21-7a1a-4793-9be7-849ac3d427bc!
	I1026 15:17:57.793136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81e2b564-6d77-48d7-9a32-6c72ab01dcb0", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-954807_9f754d21-7a1a-4793-9be7-849ac3d427bc became leader
	W1026 15:17:57.793526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:57.801418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:17:57.891868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-954807_9f754d21-7a1a-4793-9be7-849ac3d427bc!
	W1026 15:17:59.804836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:17:59.811695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:01.814572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:01.819060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:03.822864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:03.828255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:05.833665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:05.839951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:07.844028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:07.850684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:09.855368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:18:09.863800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
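
The kube-scheduler lines above are ordinary client-go behavior at control-plane startup: the scheduler builds a shared informer factory, each informer's reflector tries to LIST its resource type, and until RBAC for system:kube-scheduler is reconciled those LISTs come back forbidden and are logged through UnhandledError. The later "Caches are synced" line marks the point where every reflector has listed successfully. A minimal sketch of that machinery, using plain client-go rather than scheduler code (the kubeconfig path is an assumption of the sketch):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; the real scheduler uses in-cluster credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each Informer() call registers a reflector; a reflector that cannot LIST its
	// type logs the "Failed to watch ... is forbidden" lines seen in the trace.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	factory.Core().V1().Pods().Informer()
	factory.Storage().V1().StorageClasses().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	factory.Start(ctx.Done())

	// The counterpart of the "Caches are synced" line: true per type only once
	// its reflector has listed successfully.
	for typ, synced := range factory.WaitForCacheSync(ctx.Done()) {
		fmt.Printf("synced %v: %v\n", typ, synced)
	}
}
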
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-954807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.01s)
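
One incidental detail in the storage-provisioner log above: its leader-election event references an Endpoints object (Kind:"Endpoints", Name:"k8s.io-minikube-hostpath"), so every lease renewal round-trips through the v1 Endpoints API and emits the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. A hedged sketch of the Lease-based alternative in client-go, illustrative only and not the provisioner's actual code:

package provisionerdemo

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

// runWithLeaseLock is a sketch: the same lease name as the log, but held in a
// coordination.k8s.io/v1 Lease, which avoids the Endpoints deprecation warnings.
func runWithLeaseLock(ctx context.Context, cfg *rest.Config, id string) {
	client := kubernetes.NewForConfigOrDie(cfg)
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease; start the provisioner controller here")
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease; stop work")
			},
		},
	})
}
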

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-954807 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-954807 --alsologtostderr -v=1: exit status 80 (2.153289773s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-954807 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:19:34.529860  911650 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:19:34.530045  911650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:19:34.530069  911650 out.go:374] Setting ErrFile to fd 2...
	I1026 15:19:34.530085  911650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:19:34.530365  911650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:19:34.530648  911650 out.go:368] Setting JSON to false
	I1026 15:19:34.530701  911650 mustload.go:65] Loading cluster: no-preload-954807
	I1026 15:19:34.531107  911650 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:19:34.531613  911650 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:19:34.550578  911650 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:19:34.550911  911650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:19:34.611103  911650 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 15:19:34.601737607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:19:34.611806  911650 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-954807 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:19:34.615278  911650 out.go:179] * Pausing node no-preload-954807 ... 
	I1026 15:19:34.619072  911650 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:19:34.619436  911650 ssh_runner.go:195] Run: systemctl --version
	I1026 15:19:34.619497  911650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:19:34.637192  911650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:19:34.747544  911650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:34.760429  911650 pause.go:52] kubelet running: true
	I1026 15:19:34.760507  911650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:19:35.021487  911650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:19:35.021577  911650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:19:35.121510  911650 cri.go:89] found id: "2f1b442c63394a6e1e2d9967a43cfad768604badfe58c12bd0b44110c9f676b6"
	I1026 15:19:35.121533  911650 cri.go:89] found id: "00bf5ba9f6f7eb7ee174165b87d6143905a98c7e287e18bce58f41e656d7f5ef"
	I1026 15:19:35.121538  911650 cri.go:89] found id: "7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544"
	I1026 15:19:35.121541  911650 cri.go:89] found id: "3d0489895ef7987f8267922d4be82aea65bc786b1bc5d8331329f91f3b06f873"
	I1026 15:19:35.121545  911650 cri.go:89] found id: "752e98dc5d452109116989f3da58948224ad6572aecbb195926fc5bbad6b9f8c"
	I1026 15:19:35.121548  911650 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:19:35.121551  911650 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:19:35.121554  911650 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:19:35.121557  911650 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:19:35.121564  911650 cri.go:89] found id: "f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14"
	I1026 15:19:35.121567  911650 cri.go:89] found id: "821bf60d5210953702380bf2d035ceeea898a0c09c6c1ea9cb80ae3fc42d8fd0"
	I1026 15:19:35.121570  911650 cri.go:89] found id: ""
	I1026 15:19:35.121619  911650 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:19:35.145118  911650 retry.go:31] will retry after 349.707474ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:19:35Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:19:35.495637  911650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:35.512089  911650 pause.go:52] kubelet running: false
	I1026 15:19:35.512160  911650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:19:35.791004  911650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:19:35.791081  911650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:19:35.907519  911650 cri.go:89] found id: "2f1b442c63394a6e1e2d9967a43cfad768604badfe58c12bd0b44110c9f676b6"
	I1026 15:19:35.907541  911650 cri.go:89] found id: "00bf5ba9f6f7eb7ee174165b87d6143905a98c7e287e18bce58f41e656d7f5ef"
	I1026 15:19:35.907547  911650 cri.go:89] found id: "7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544"
	I1026 15:19:35.907552  911650 cri.go:89] found id: "3d0489895ef7987f8267922d4be82aea65bc786b1bc5d8331329f91f3b06f873"
	I1026 15:19:35.907555  911650 cri.go:89] found id: "752e98dc5d452109116989f3da58948224ad6572aecbb195926fc5bbad6b9f8c"
	I1026 15:19:35.907559  911650 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:19:35.907568  911650 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:19:35.907571  911650 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:19:35.907585  911650 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:19:35.907592  911650 cri.go:89] found id: "f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14"
	I1026 15:19:35.907595  911650 cri.go:89] found id: "821bf60d5210953702380bf2d035ceeea898a0c09c6c1ea9cb80ae3fc42d8fd0"
	I1026 15:19:35.907599  911650 cri.go:89] found id: ""
	I1026 15:19:35.907656  911650 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:19:35.920635  911650 retry.go:31] will retry after 299.859665ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:19:35Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:19:36.221030  911650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:36.240368  911650 pause.go:52] kubelet running: false
	I1026 15:19:36.240443  911650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:19:36.468906  911650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:19:36.468987  911650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:19:36.590894  911650 cri.go:89] found id: "2f1b442c63394a6e1e2d9967a43cfad768604badfe58c12bd0b44110c9f676b6"
	I1026 15:19:36.590916  911650 cri.go:89] found id: "00bf5ba9f6f7eb7ee174165b87d6143905a98c7e287e18bce58f41e656d7f5ef"
	I1026 15:19:36.590921  911650 cri.go:89] found id: "7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544"
	I1026 15:19:36.590925  911650 cri.go:89] found id: "3d0489895ef7987f8267922d4be82aea65bc786b1bc5d8331329f91f3b06f873"
	I1026 15:19:36.590928  911650 cri.go:89] found id: "752e98dc5d452109116989f3da58948224ad6572aecbb195926fc5bbad6b9f8c"
	I1026 15:19:36.590932  911650 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:19:36.590936  911650 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:19:36.590939  911650 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:19:36.590942  911650 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:19:36.590952  911650 cri.go:89] found id: "f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14"
	I1026 15:19:36.590956  911650 cri.go:89] found id: "821bf60d5210953702380bf2d035ceeea898a0c09c6c1ea9cb80ae3fc42d8fd0"
	I1026 15:19:36.590959  911650 cri.go:89] found id: ""
	I1026 15:19:36.591020  911650 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:19:36.609724  911650 out.go:203] 
	W1026 15:19:36.612659  911650 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:19:36.612769  911650 out.go:285] * 
	* 
	W1026 15:19:36.620075  911650 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:19:36.625219  911650 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-954807 --alsologtostderr -v=1 failed: exit status 80
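
The failure mode in the trace is consistent across all three attempts: the first pass stops the kubelet, but every enumeration of running containers via sudo runc list -f json exits 1 because /run/runc is missing inside the node container, so after the backoff retries minikube gives up with GUEST_PAUSE. The preceding crictl listing succeeds each time, which suggests CRI-O itself is healthy and only runc's default state directory is absent or relocated. The "will retry after 349.707474ms" / "will retry after 299.859665ms" lines are a jittered backoff; a compact sketch of that pattern, with hypothetical names rather than minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunningContainers retries the same command the trace runs, with a
// jittered delay between attempts, until a deadline elapses.
func listRunningContainers(deadline time.Duration) ([]byte, error) {
	start := time.Now()
	base := 300 * time.Millisecond
	for {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		if time.Since(start) > deadline {
			// Mirrors the shape of the GUEST_PAUSE error above.
			return nil, fmt.Errorf("list running: runc: %w\noutput: %s", err, out)
		}
		// Jitter the delay, as in "will retry after 349.707474ms".
		time.Sleep(base + time.Duration(rand.Int63n(int64(base/2))))
	}
}

func main() {
	if out, err := listRunningContainers(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Printf("%s\n", out)
	}
}
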
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-954807
helpers_test.go:243: (dbg) docker inspect no-preload-954807:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	        "Created": "2025-10-26T15:16:49.517959935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:18:25.253217969Z",
	            "FinishedAt": "2025-10-26T15:18:24.200414224Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hostname",
	        "HostsPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hosts",
	        "LogPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786-json.log",
	        "Name": "/no-preload-954807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-954807:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-954807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	                "LowerDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-954807",
	                "Source": "/var/lib/docker/volumes/no-preload-954807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-954807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-954807",
	                "name.minikube.sigs.k8s.io": "no-preload-954807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a9c6558973af4da05d4687081ad321bee74f16a14068b20d7d0ef5c2e8a0476",
	            "SandboxKey": "/var/run/docker/netns/5a9c6558973a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-954807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:4e:fa:1c:63:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85855106e1f3577e90f02f145412c517c0b5aba224f5d8005b2109486b8acb25",
	                    "EndpointID": "0f69efc50d7f590a2dbc36762c032253ce6c4e2310767f3d876ad684eff54bfb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-954807",
	                        "974a34e5ba04"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
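
Everything the pause command needed from this inspect output lives under NetworkSettings.Ports: the trace earlier pulled the 22/tcp host port (33847) with a Go template passed to docker container inspect -f before opening its SSH client. A small sketch reproducing that extraction (shelling out to docker is illustrative; the container name matches the profile above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the published host port for 22/tcp using the same
// template string that appears in the cli_runner.go line of the trace.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-954807")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh mapped to 127.0.0.1:" + port) // 33847 in the inspect output above
}
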
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807: exit status 2 (518.869408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-954807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-954807 logs -n 25: (1.820360581s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                          │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                          │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                              │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                               │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:18:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:18:24.873586  908785 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:18:24.873824  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.873852  908785 out.go:374] Setting ErrFile to fd 2...
	I1026 15:18:24.873873  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.874151  908785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:18:24.874543  908785 out.go:368] Setting JSON to false
	I1026 15:18:24.875517  908785 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18057,"bootTime":1761473848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:18:24.875610  908785 start.go:141] virtualization:  
	I1026 15:18:24.878798  908785 out.go:179] * [no-preload-954807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:18:24.882718  908785 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:18:24.882793  908785 notify.go:220] Checking for updates...
	I1026 15:18:24.886906  908785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:18:24.889801  908785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:24.892783  908785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:18:24.895757  908785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:18:24.898642  908785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:18:24.901948  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:24.902567  908785 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:18:24.948990  908785 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:18:24.949107  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.042150  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.031901314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.042260  908785 docker.go:318] overlay module found
	I1026 15:18:25.045300  908785 out.go:179] * Using the docker driver based on existing profile
	I1026 15:18:25.048156  908785 start.go:305] selected driver: docker
	I1026 15:18:25.048169  908785 start.go:925] validating driver "docker" against &{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.048276  908785 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:18:25.049069  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.141402  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.129156893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.141737  908785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:25.141766  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:25.141824  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:25.141856  908785 start.go:349] cluster config:
	{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.145012  908785 out.go:179] * Starting "no-preload-954807" primary control-plane node in "no-preload-954807" cluster
	I1026 15:18:25.147872  908785 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:18:25.150783  908785 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:18:25.153691  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:25.153844  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.154159  908785 cache.go:107] acquiring lock: {Name:mkbe2086c35e9fcbe8c03bdef4b41f05ca228154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154244  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:18:25.154253  908785 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.981µs
	I1026 15:18:25.154266  908785 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:18:25.154278  908785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:18:25.154523  908785 cache.go:107] acquiring lock: {Name:mk2325fad129f4b7d5aa09cccfdaa3da809a73fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154591  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:18:25.154599  908785 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 83.743µs
	I1026 15:18:25.154607  908785 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:18:25.154618  908785 cache.go:107] acquiring lock: {Name:mk54c57481d4cb891842b1b352451c8a69a47281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154662  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:18:25.154672  908785 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 56.033µs
	I1026 15:18:25.154686  908785 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:18:25.154696  908785 cache.go:107] acquiring lock: {Name:mk5a8cbd33cc84011ebd29296028bb78893eefc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154727  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:18:25.154731  908785 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.53µs
	I1026 15:18:25.154737  908785 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:18:25.154746  908785 cache.go:107] acquiring lock: {Name:mkaf3dfd27f1d15aad668c191c7cc85c71d2c9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154771  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:18:25.154776  908785 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.376µs
	I1026 15:18:25.154782  908785 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:18:25.154792  908785 cache.go:107] acquiring lock: {Name:mk964a36cda2ac1ad4a9006d14be02c6bd71c41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154916  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:18:25.154923  908785 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 131.685µs
	I1026 15:18:25.154929  908785 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:18:25.154963  908785 cache.go:107] acquiring lock: {Name:mkef4d9c96ab97f5a848fa8d925b343812fa37ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155004  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:18:25.155014  908785 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 71.73µs
	I1026 15:18:25.155020  908785 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:18:25.155031  908785 cache.go:107] acquiring lock: {Name:mkc8d2557eb259bb5390e2f2db4396a6aec79411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155060  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:18:25.155065  908785 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 35.389µs
	I1026 15:18:25.155076  908785 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:18:25.155087  908785 cache.go:87] Successfully saved all images to host disk.
	I1026 15:18:25.186482  908785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:18:25.186502  908785 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:18:25.186515  908785 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:18:25.186538  908785 start.go:360] acquireMachinesLock for no-preload-954807: {Name:mk3de11c10d64abd2c458c411445bde4bf32881c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.186600  908785 start.go:364] duration metric: took 46.409µs to acquireMachinesLock for "no-preload-954807"
	I1026 15:18:25.186620  908785 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:18:25.186626  908785 fix.go:54] fixHost starting: 
	I1026 15:18:25.186892  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.218587  908785 fix.go:112] recreateIfNeeded on no-preload-954807: state=Stopped err=<nil>
	W1026 15:18:25.218633  908785 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:18:23.824889  906105 out.go:252]   - Booting up control plane ...
	I1026 15:18:23.825002  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:23.825084  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:23.826750  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:23.843130  906105 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:23.843590  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:23.851900  906105 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:23.852216  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:23.852513  906105 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:24.001209  906105 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:24.001367  906105 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:25.996925  906105 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000832133s
	I1026 15:18:26.000302  906105 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:26.000400  906105 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1026 15:18:26.000511  906105 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:26.000594  906105 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
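For reference, the four health endpoints kubeadm polls in the lines above can be probed by hand; a minimal sketch, assuming shell access on the control-plane node (-k because the local TLS endpoints serve self-signed certificates):

	curl http://127.0.0.1:10248/healthz        # kubelet
	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	curl -k https://192.168.76.2:8444/livez    # kube-apiserver (custom --apiserver-port=8444)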
	I1026 15:18:25.221939  908785 out.go:252] * Restarting existing docker container for "no-preload-954807" ...
	I1026 15:18:25.222028  908785 cli_runner.go:164] Run: docker start no-preload-954807
	I1026 15:18:25.539012  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.573935  908785 kic.go:430] container "no-preload-954807" state is running.
	I1026 15:18:25.574383  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:25.603715  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.604226  908785 machine.go:93] provisionDockerMachine start ...
	I1026 15:18:25.604316  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:25.634297  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:25.634626  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:25.634636  908785 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:18:25.636397  908785 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:18:28.841282  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:28.841360  908785 ubuntu.go:182] provisioning hostname "no-preload-954807"
	I1026 15:18:28.841444  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:28.866436  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:28.866762  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:28.866774  908785 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-954807 && echo "no-preload-954807" | sudo tee /etc/hostname
	I1026 15:18:29.069155  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:29.069302  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.098780  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:29.099104  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:29.099122  908785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-954807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-954807/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-954807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:18:29.276929  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
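Whichever branch of the snippet ran, it converges /etc/hosts on one pinned entry; assuming a stock image that ships a 127.0.1.1 line, the result is:

	127.0.1.1 no-preload-954807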
	I1026 15:18:29.276952  908785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:18:29.276983  908785 ubuntu.go:190] setting up certificates
	I1026 15:18:29.276993  908785 provision.go:84] configureAuth start
	I1026 15:18:29.277060  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:29.299794  908785 provision.go:143] copyHostCerts
	I1026 15:18:29.299860  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:18:29.299879  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:18:29.299957  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:18:29.300067  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:18:29.300072  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:18:29.300099  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:18:29.300159  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:18:29.300168  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:18:29.300193  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:18:29.300245  908785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.no-preload-954807 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-954807]
	I1026 15:18:30.781617  906105 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.780617084s
	I1026 15:18:29.899785  908785 provision.go:177] copyRemoteCerts
	I1026 15:18:29.899900  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:18:29.899970  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.942702  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.078143  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:18:30.113207  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:18:30.146061  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:18:30.178703  908785 provision.go:87] duration metric: took 901.687509ms to configureAuth
	I1026 15:18:30.178771  908785 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:18:30.178995  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:30.179148  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.207087  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:30.207408  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:30.207425  908785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:18:30.676969  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:18:30.677026  908785 machine.go:96] duration metric: took 5.072780445s to provisionDockerMachine
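The sysconfig write a few lines above produces a one-line environment file, whose contents are echoed back in the output:

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

Presumably the kicbase crio unit sources this via an EnvironmentFile directive, which is why the command restarts crio afterwards; that is an assumption about the unit file, which this log does not show.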
	I1026 15:18:30.677052  908785 start.go:293] postStartSetup for "no-preload-954807" (driver="docker")
	I1026 15:18:30.677077  908785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:18:30.677149  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:18:30.677252  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.710413  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.823871  908785 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:18:30.827555  908785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:18:30.827587  908785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:18:30.827599  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:18:30.827656  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:18:30.827744  908785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:18:30.827864  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:18:30.838700  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:30.871356  908785 start.go:296] duration metric: took 194.275536ms for postStartSetup
	I1026 15:18:30.871461  908785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:18:30.871518  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.902387  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.034591  908785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:18:31.045225  908785 fix.go:56] duration metric: took 5.858591617s for fixHost
	I1026 15:18:31.045253  908785 start.go:83] releasing machines lock for "no-preload-954807", held for 5.85864381s
	I1026 15:18:31.045332  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:31.106399  908785 ssh_runner.go:195] Run: cat /version.json
	I1026 15:18:31.106456  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.106711  908785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:18:31.106777  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.151426  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.158586  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.396049  908785 ssh_runner.go:195] Run: systemctl --version
	I1026 15:18:31.403261  908785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:18:31.469937  908785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:18:31.482908  908785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:18:31.483041  908785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:18:31.493995  908785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:18:31.494066  908785 start.go:495] detecting cgroup driver to use...
	I1026 15:18:31.494113  908785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:18:31.494187  908785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:18:31.521177  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:18:31.541265  908785 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:18:31.541370  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:18:31.569119  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:18:31.584298  908785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:18:31.790771  908785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:18:32.003146  908785 docker.go:234] disabling docker service ...
	I1026 15:18:32.003270  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:18:32.027531  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:18:32.052390  908785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:18:32.244277  908785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:18:32.429463  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:18:32.445776  908785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:18:32.465349  908785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:18:32.465428  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.478857  908785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:18:32.478978  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.488961  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.499025  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.509768  908785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:18:32.519485  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.529990  908785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.539869  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.550905  908785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:18:32.559187  908785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
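Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a handful of settings. Reconstructed roughly from the commands (the ordering and TOML section placement are assumed, since the log never prints the finished file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]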
	I1026 15:18:32.568293  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:32.731012  908785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:18:32.890143  908785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:18:32.890243  908785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:18:32.895296  908785 start.go:563] Will wait 60s for crictl version
	I1026 15:18:32.895370  908785 ssh_runner.go:195] Run: which crictl
	I1026 15:18:32.899632  908785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:18:32.959445  908785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:18:32.959551  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:32.999198  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:33.053114  908785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:18:32.381923  906105 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.381595886s
	I1026 15:18:34.004615  906105 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.004015537s
	I1026 15:18:34.039440  906105 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:34.060957  906105 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:34.093820  906105 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:34.094029  906105 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-494684 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:34.116373  906105 kubeadm.go:318] [bootstrap-token] Using token: opo3lq.zbfbsr53k4i0zecq
	I1026 15:18:33.056258  908785 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:18:33.077802  908785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:18:33.083627  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
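Unpacked, that one-liner is equivalent to the following, clearer form (temporary-file name kept from the original; comments added):

	tmp="/tmp/h.$$"
	# drop any stale host.minikube.internal entry, then append the fresh one
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.85.1\thost.minikube.internal\n'; } > "$tmp"
	sudo cp "$tmp" /etc/hosts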
	I1026 15:18:33.094756  908785 kubeadm.go:883] updating cluster {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:18:33.094867  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:33.094911  908785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:33.140777  908785 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:18:33.140799  908785 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:18:33.140815  908785 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:18:33.140916  908785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-954807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:18:33.140993  908785 ssh_runner.go:195] Run: crio config
	I1026 15:18:33.234362  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:33.234382  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:33.234396  908785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:18:33.234442  908785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-954807 NodeName:no-preload-954807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:18:33.234611  908785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-954807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:18:33.234704  908785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:18:33.244949  908785 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:18:33.245042  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:18:33.252734  908785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:18:33.266334  908785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:18:33.280280  908785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:18:33.300014  908785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:18:33.305316  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:33.315583  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:33.467826  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:33.491186  908785 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807 for IP: 192.168.85.2
	I1026 15:18:33.491220  908785 certs.go:195] generating shared ca certs ...
	I1026 15:18:33.491258  908785 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:33.491442  908785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:18:33.491517  908785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:18:33.491547  908785 certs.go:257] generating profile certs ...
	I1026 15:18:33.491665  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key
	I1026 15:18:33.491771  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805
	I1026 15:18:33.491845  908785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key
	I1026 15:18:33.492003  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:18:33.492056  908785 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:18:33.492084  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:18:33.492115  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:18:33.492158  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:18:33.492198  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:18:33.492264  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:33.493002  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:18:33.513517  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:18:33.532884  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:18:33.555231  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:18:33.579754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:18:33.602447  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:18:33.628293  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:18:33.684754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:18:33.753264  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:18:33.821238  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:18:33.843108  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:18:33.862371  908785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:18:33.878516  908785 ssh_runner.go:195] Run: openssl version
	I1026 15:18:33.885509  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:18:33.895167  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.900931  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.901140  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.967665  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:18:33.976773  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:18:33.985438  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990423  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990496  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:18:34.052535  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:18:34.062937  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:18:34.072240  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076658  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076793  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.127445  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
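The three ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash naming convention: "openssl x509 -hash -noout" prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients locate the CA by hash lookup. A minimal Go sketch of that sequence, shelling out to openssl exactly as the log does (illustrative only, not minikube's actual helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors the logged sequence: ask openssl for the
    // certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 to
    // the PEM so OpenSSL-based clients can find the CA by hash lookup.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }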
	I1026 15:18:34.136993  908785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:18:34.141905  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:18:34.197715  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:18:34.255022  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:18:34.321728  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:18:34.389895  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:18:34.548526  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
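The -checkend 86400 probes above exit non-zero if a certificate expires within 86400 seconds (24 hours); this is how the restart path decides whether certs need regenerating. A rough Go equivalent using crypto/x509 (a sketch; assumes a single PEM-encoded certificate per file):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }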
	I1026 15:18:34.681856  908785 kubeadm.go:400] StartCluster: {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:34.681971  908785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:18:34.682063  908785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:18:34.783414  908785 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:18:34.783566  908785 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:18:34.783587  908785 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:18:34.783621  908785 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:18:34.783639  908785 cri.go:89] found id: ""
	I1026 15:18:34.783719  908785 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:18:34.811816  908785 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:18:34.812057  908785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:18:34.827966  908785 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:18:34.828092  908785 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:18:34.828177  908785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:18:34.843255  908785 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:18:34.843698  908785 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-954807" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.843791  908785 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-954807" cluster setting kubeconfig missing "no-preload-954807" context setting]
	I1026 15:18:34.844059  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.845642  908785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:18:34.871634  908785 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 15:18:34.871666  908785 kubeadm.go:601] duration metric: took 43.554458ms to restartPrimaryControlPlane
	I1026 15:18:34.871675  908785 kubeadm.go:402] duration metric: took 189.829653ms to StartCluster
	I1026 15:18:34.871690  908785 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.871749  908785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.872330  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.872519  908785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:34.873018  908785 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:34.873111  908785 addons.go:69] Setting storage-provisioner=true in profile "no-preload-954807"
	I1026 15:18:34.873126  908785 addons.go:238] Setting addon storage-provisioner=true in "no-preload-954807"
	W1026 15:18:34.873137  908785 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:18:34.873163  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873189  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:34.873264  908785 addons.go:69] Setting dashboard=true in profile "no-preload-954807"
	I1026 15:18:34.873297  908785 addons.go:238] Setting addon dashboard=true in "no-preload-954807"
	W1026 15:18:34.873336  908785 addons.go:247] addon dashboard should already be in state true
	I1026 15:18:34.873368  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873660  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877946  908785 addons.go:69] Setting default-storageclass=true in profile "no-preload-954807"
	I1026 15:18:34.878023  908785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-954807"
	I1026 15:18:34.877565  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.878787  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877575  908785 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:34.888833  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:34.921307  908785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.925761  908785 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:34.925783  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:34.925866  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.941729  908785 addons.go:238] Setting addon default-storageclass=true in "no-preload-954807"
	W1026 15:18:34.941762  908785 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:18:34.941790  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.942216  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.950093  908785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:18:34.956801  908785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:18:34.119508  906105 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:34.119644  906105 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:34.125645  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:34.136618  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:34.144003  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:34.155143  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:34.162423  906105 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:34.413457  906105 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:35.074961  906105 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:35.413379  906105 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:35.414997  906105 kubeadm.go:318] 
	I1026 15:18:35.415072  906105 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:35.415078  906105 kubeadm.go:318] 
	I1026 15:18:35.415155  906105 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:35.415160  906105 kubeadm.go:318] 
	I1026 15:18:35.415185  906105 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:35.419772  906105 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:35.419856  906105 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:35.419905  906105 kubeadm.go:318] 
	I1026 15:18:35.420002  906105 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:35.420007  906105 kubeadm.go:318] 
	I1026 15:18:35.420066  906105 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:35.420070  906105 kubeadm.go:318] 
	I1026 15:18:35.420148  906105 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:35.420235  906105 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:35.420314  906105 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:35.420324  906105 kubeadm.go:318] 
	I1026 15:18:35.420408  906105 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:35.420488  906105 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:35.420497  906105 kubeadm.go:318] 
	I1026 15:18:35.420612  906105 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.420744  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:18:35.420794  906105 kubeadm.go:318] 	--control-plane 
	I1026 15:18:35.420800  906105 kubeadm.go:318] 
	I1026 15:18:35.420895  906105 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:35.420900  906105 kubeadm.go:318] 
	I1026 15:18:35.420998  906105 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.421110  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:18:35.440042  906105 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:18:35.440280  906105 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:18:35.440391  906105 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:35.440407  906105 cni.go:84] Creating CNI manager for ""
	I1026 15:18:35.440414  906105 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:35.444207  906105 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:18:35.447185  906105 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:18:35.456310  906105 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:18:35.456334  906105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:18:35.507388  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
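cni.go:143 picks kindnet because the docker driver is paired with a non-docker runtime (crio here), and the chosen manifest is then applied with the version-pinned kubectl, as logged above. A toy version of that decision rule (a sketch; the real logic also honors an explicit --cni flag and multi-node settings):

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged at cni.go:143: the docker
    // driver combined with a non-docker container runtime recommends
    // kindnet; otherwise fall back to a bridge CNI.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet, as logged above
    }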
	I1026 15:18:36.090917  906105 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:36.091006  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:36.091050  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-494684 minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=default-k8s-diff-port-494684 minikube.k8s.io/primary=true
	I1026 15:18:36.514936  906105 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:36.515052  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.015410  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.515116  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.015362  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.515615  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.015108  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.233922  906105 kubeadm.go:1113] duration metric: took 3.142974166s to wait for elevateKubeSystemPrivileges
	I1026 15:18:39.233954  906105 kubeadm.go:402] duration metric: took 23.046817686s to StartCluster
	I1026 15:18:39.233975  906105 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.234032  906105 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:39.235069  906105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.235311  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:18:39.235322  906105 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:39.235586  906105 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:39.235621  906105 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:39.235684  906105 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.235698  906105 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.235723  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.236178  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.236758  906105 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.236781  906105 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-494684"
	I1026 15:18:39.237117  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.240502  906105 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:39.252908  906105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:39.270053  906105 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.270095  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.270522  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.282068  906105 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.959584  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:18:34.959611  908785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:18:34.959687  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.980722  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:34.990491  908785 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:34.990523  908785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:34.990600  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:35.026564  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.044932  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.366750  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:18:35.366822  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:18:35.430297  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:35.447981  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:35.526736  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:18:35.526816  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:18:35.541300  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:35.557089  908785 node_ready.go:35] waiting up to 6m0s for node "no-preload-954807" to be "Ready" ...
	I1026 15:18:35.640785  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:18:35.640819  908785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:18:35.771188  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:18:35.771215  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:18:35.825305  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:18:35.825332  908785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:18:35.945173  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:18:35.945241  908785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:18:36.043908  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:18:36.043985  908785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:18:36.074085  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:18:36.074164  908785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:18:36.114626  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:18:36.114697  908785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:18:36.162322  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
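The dashboard addon lands as a single kubectl apply with every manifest passed as its own -f flag, as in the invocation above. Building that argument list is straightforward; a sketch with a hypothetical helper (the real code also sets KUBECONFIG and runs via sudo):

    package main

    import "fmt"

    // applyArgs assembles a `kubectl apply -f a -f b ...` argument vector,
    // one -f per addon manifest, matching the logged command shape.
    func applyArgs(kubectl string, manifests []string) []string {
        args := []string{kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return args
    }

    func main() {
        fmt.Println(applyArgs("/var/lib/minikube/binaries/v1.34.1/kubectl",
            []string{"/etc/kubernetes/addons/dashboard-ns.yaml", "/etc/kubernetes/addons/dashboard-svc.yaml"}))
    }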
	I1026 15:18:39.285064  906105 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.285091  906105 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:39.285174  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.313693  906105 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:39.313726  906105 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:39.313788  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.329825  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.352237  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.833145  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:18:39.835130  906105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:39.865906  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.891557  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:41.038716  906105 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.205461428s)
	I1026 15:18:41.038845  906105 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 15:18:41.038811  906105 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203616764s)
	I1026 15:18:41.039823  906105 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:18:41.560927  906105 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-494684" context rescaled to 1 replicas
	I1026 15:18:41.767543  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.901592798s)
	I1026 15:18:41.767597  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.875965389s)
	I1026 15:18:41.789656  906105 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:18:41.792471  906105 addons.go:514] duration metric: took 2.556838269s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:18:42.009830  908785 node_ready.go:49] node "no-preload-954807" is "Ready"
	I1026 15:18:42.009866  908785 node_ready.go:38] duration metric: took 6.452696965s for node "no-preload-954807" to be "Ready" ...
	I1026 15:18:42.009885  908785 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:42.009955  908785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:44.074337  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.626316807s)
	I1026 15:18:44.074430  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.533054521s)
	I1026 15:18:44.093634  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.931220048s)
	I1026 15:18:44.093821  908785 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.083847423s)
	I1026 15:18:44.093842  908785 api_server.go:72] duration metric: took 9.221303285s to wait for apiserver process to appear ...
	I1026 15:18:44.093849  908785 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:44.093871  908785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:18:44.096535  908785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-954807 addons enable metrics-server
	
	I1026 15:18:44.100991  908785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:18:44.103937  908785 addons.go:514] duration metric: took 9.230903875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:18:44.105206  908785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:18:44.106296  908785 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:44.106318  908785 api_server.go:131] duration metric: took 12.458566ms to wait for apiserver health ...
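The healthz gate above is a plain HTTPS GET against the apiserver expecting status 200 and body "ok". A minimal poller sketch (TLS verification is skipped here for brevity only; a real client would trust the CA distributed earlier):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == 200 && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz not ok within %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }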
	I1026 15:18:44.106327  908785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:44.109695  908785 system_pods.go:59] 8 kube-system pods found
	I1026 15:18:44.109733  908785 system_pods.go:61] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.109742  908785 system_pods.go:61] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.109750  908785 system_pods.go:61] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.109757  908785 system_pods.go:61] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.109764  908785 system_pods.go:61] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.109768  908785 system_pods.go:61] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.109775  908785 system_pods.go:61] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.109780  908785 system_pods.go:61] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.109786  908785 system_pods.go:74] duration metric: took 3.453281ms to wait for pod list to return data ...
	I1026 15:18:44.109794  908785 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:44.112368  908785 default_sa.go:45] found service account: "default"
	I1026 15:18:44.112388  908785 default_sa.go:55] duration metric: took 2.586901ms for default service account to be created ...
	I1026 15:18:44.112396  908785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:44.115134  908785 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:44.115216  908785 system_pods.go:89] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.115250  908785 system_pods.go:89] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.115283  908785 system_pods.go:89] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.115306  908785 system_pods.go:89] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.115328  908785 system_pods.go:89] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.115352  908785 system_pods.go:89] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.115383  908785 system_pods.go:89] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.115402  908785 system_pods.go:89] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.115424  908785 system_pods.go:126] duration metric: took 3.020964ms to wait for k8s-apps to be running ...
	I1026 15:18:44.115449  908785 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:44.115528  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:44.132618  908785 system_svc.go:56] duration metric: took 17.163659ms WaitForService to wait for kubelet
	I1026 15:18:44.132642  908785 kubeadm.go:586] duration metric: took 9.260101546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:44.132663  908785 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:44.135549  908785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:18:44.135577  908785 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:44.135589  908785 node_conditions.go:105] duration metric: took 2.919573ms to run NodePressure ...
	I1026 15:18:44.135602  908785 start.go:241] waiting for startup goroutines ...
	I1026 15:18:44.135610  908785 start.go:246] waiting for cluster config update ...
	I1026 15:18:44.135620  908785 start.go:255] writing updated cluster config ...
	I1026 15:18:44.135912  908785 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:44.139910  908785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:44.143746  908785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:43.043031  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:45.079469  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:46.199968  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:48.651556  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:47.542993  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:50.043710  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:52.043878  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:51.150539  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:53.150747  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:54.543184  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:57.043649  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:55.650612  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:57.655226  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:59.545897  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:02.043891  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:00.154271  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:02.649805  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.650562  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.543488  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.043487  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.149715  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.650530  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.542582  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:11.543176  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:12.149228  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.157707  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.043223  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.043564  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.651190  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:19.150299  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:18.543877  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:21.042737  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	I1026 15:19:21.149393  908785 pod_ready.go:94] pod "coredns-66bc5c9577-7xjmh" is "Ready"
	I1026 15:19:21.149423  908785 pod_ready.go:86] duration metric: took 37.005599421s for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.152545  908785 pod_ready.go:83] waiting for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.157626  908785 pod_ready.go:94] pod "etcd-no-preload-954807" is "Ready"
	I1026 15:19:21.157652  908785 pod_ready.go:86] duration metric: took 5.07725ms for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.160404  908785 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.165211  908785 pod_ready.go:94] pod "kube-apiserver-no-preload-954807" is "Ready"
	I1026 15:19:21.165241  908785 pod_ready.go:86] duration metric: took 4.811401ms for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.171007  908785 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.347722  908785 pod_ready.go:94] pod "kube-controller-manager-no-preload-954807" is "Ready"
	I1026 15:19:21.347751  908785 pod_ready.go:86] duration metric: took 176.720385ms for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.547746  908785 pod_ready.go:83] waiting for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.947461  908785 pod_ready.go:94] pod "kube-proxy-q8nns" is "Ready"
	I1026 15:19:21.947490  908785 pod_ready.go:86] duration metric: took 399.680606ms for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.147722  908785 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548568  908785 pod_ready.go:94] pod "kube-scheduler-no-preload-954807" is "Ready"
	I1026 15:19:22.548648  908785 pod_ready.go:86] duration metric: took 400.89538ms for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548677  908785 pod_ready.go:40] duration metric: took 38.40866909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:22.645734  908785 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:22.649853  908785 out.go:179] * Done! kubectl is now configured to use "no-preload-954807" cluster and "default" namespace by default
	I1026 15:19:22.543199  906105 node_ready.go:49] node "default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:22.543232  906105 node_ready.go:38] duration metric: took 41.503374902s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:19:22.543247  906105 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:19:22.543322  906105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:19:22.558433  906105 api_server.go:72] duration metric: took 43.323081637s to wait for apiserver process to appear ...
	I1026 15:19:22.558456  906105 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:19:22.558476  906105 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1026 15:19:22.574126  906105 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1026 15:19:22.576212  906105 api_server.go:141] control plane version: v1.34.1
	I1026 15:19:22.576245  906105 api_server.go:131] duration metric: took 17.782398ms to wait for apiserver health ...
	I1026 15:19:22.576254  906105 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:19:22.585670  906105 system_pods.go:59] 8 kube-system pods found
	I1026 15:19:22.585709  906105 system_pods.go:61] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.585717  906105 system_pods.go:61] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.585725  906105 system_pods.go:61] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.585730  906105 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.585736  906105 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.585746  906105 system_pods.go:61] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.585754  906105 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.585761  906105 system_pods.go:61] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.585776  906105 system_pods.go:74] duration metric: took 9.51752ms to wait for pod list to return data ...
	I1026 15:19:22.585789  906105 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:19:22.589014  906105 default_sa.go:45] found service account: "default"
	I1026 15:19:22.589043  906105 default_sa.go:55] duration metric: took 3.244286ms for default service account to be created ...
	I1026 15:19:22.589054  906105 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:19:22.597482  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.597521  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.597529  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.597536  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.597541  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.597546  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.597551  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.597557  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.597566  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.597592  906105 retry.go:31] will retry after 270.93989ms: missing components: kube-dns
	I1026 15:19:22.890382  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.890422  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.890429  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.890436  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.890442  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.890447  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.890454  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.890458  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.890466  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.890480  906105 retry.go:31] will retry after 311.29252ms: missing components: kube-dns
	I1026 15:19:23.207300  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.207338  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.207345  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.207352  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.207356  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.207360  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.207365  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.207369  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.207375  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.207394  906105 retry.go:31] will retry after 338.060587ms: missing components: kube-dns
	I1026 15:19:23.549179  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.549216  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.549224  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.549231  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.549235  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.549239  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.549244  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.549248  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.549254  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.549269  906105 retry.go:31] will retry after 395.592761ms: missing components: kube-dns
	I1026 15:19:23.949803  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.949839  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Running
	I1026 15:19:23.949846  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.949854  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.949861  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.949866  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.949874  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.949879  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.949884  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Running
	I1026 15:19:23.949892  906105 system_pods.go:126] duration metric: took 1.360831952s to wait for k8s-apps to be running ...
	I1026 15:19:23.949905  906105 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:19:23.949966  906105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:23.964269  906105 system_svc.go:56] duration metric: took 14.355022ms WaitForService to wait for kubelet
	I1026 15:19:23.964297  906105 kubeadm.go:586] duration metric: took 44.728950966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:19:23.964316  906105 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:19:23.967634  906105 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:19:23.967669  906105 node_conditions.go:123] node cpu capacity is 2
	I1026 15:19:23.967684  906105 node_conditions.go:105] duration metric: took 3.327873ms to run NodePressure ...
	I1026 15:19:23.967696  906105 start.go:241] waiting for startup goroutines ...
	I1026 15:19:23.967745  906105 start.go:246] waiting for cluster config update ...
	I1026 15:19:23.967757  906105 start.go:255] writing updated cluster config ...
	I1026 15:19:23.968071  906105 ssh_runner.go:195] Run: rm -f paused
	I1026 15:19:23.972391  906105 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:23.978846  906105 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.984309  906105 pod_ready.go:94] pod "coredns-66bc5c9577-zm8vb" is "Ready"
	I1026 15:19:23.984341  906105 pod_ready.go:86] duration metric: took 5.466432ms for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.987133  906105 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.992540  906105 pod_ready.go:94] pod "etcd-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.992578  906105 pod_ready.go:86] duration metric: took 5.419399ms for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.995145  906105 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.999951  906105 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.999979  906105 pod_ready.go:86] duration metric: took 4.806707ms for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.003124  906105 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.376257  906105 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:24.376287  906105 pod_ready.go:86] duration metric: took 373.130356ms for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.576393  906105 pod_ready.go:83] waiting for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.976974  906105 pod_ready.go:94] pod "kube-proxy-nbcd6" is "Ready"
	I1026 15:19:24.977002  906105 pod_ready.go:86] duration metric: took 400.540602ms for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.178270  906105 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576418  906105 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:25.576444  906105 pod_ready.go:86] duration metric: took 398.150209ms for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576456  906105 pod_ready.go:40] duration metric: took 1.604033075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:25.629832  906105 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:25.636667  906105 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-494684" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.786836377Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.790327623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.790358885Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.79038204Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.794256864Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.79428962Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.794306079Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.810536412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.810573105Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.81059786Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.814234651Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.814272912Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.825280743Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=146866d4-6e05-44f7-81aa-0d0a7f71b1a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.826200195Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c14f395-a4dd-4f86-a40b-8aa3f6b203a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.827309048Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=25d866a9-6224-4498-a8a8-6a3d1298d072 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.827445312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.834846181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.835386208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.862691878Z" level=info msg="Created container f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=25d866a9-6224-4498-a8a8-6a3d1298d072 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.863786881Z" level=info msg="Starting container: f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14" id=bf9cb6ab-e5c8-4e6d-afc4-77a6bca3b3ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.865782103Z" level=info msg="Started container" PID=1715 containerID=f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper id=bf9cb6ab-e5c8-4e6d-afc4-77a6bca3b3ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be
	Oct 26 15:19:29 no-preload-954807 conmon[1713]: conmon f04c6bfff6203a1a10d4 <ninfo>: container 1715 exited with status 1
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.098232372Z" level=info msg="Removing container: d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.115729943Z" level=info msg="Error loading conmon cgroup of container d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245: cgroup deleted" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.119319783Z" level=info msg="Removed container d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f04c6bfff6203       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   1ab8e350af3c6       dashboard-metrics-scraper-6ffb444bf9-s2lnr   kubernetes-dashboard
	2f1b442c63394       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   20b3c25034a57       storage-provisioner                          kube-system
	821bf60d52109       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   54ee650019adb       kubernetes-dashboard-855c9754f9-mns4v        kubernetes-dashboard
	00bf5ba9f6f7e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   f3c4e4d8a5fad       coredns-66bc5c9577-7xjmh                     kube-system
	15ddf611b7c06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   483b5c42d101e       busybox                                      default
	7f2f05ce22257       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   20b3c25034a57       storage-provisioner                          kube-system
	3d0489895ef79       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   e51963a248cfe       kindnet-9grs2                                kube-system
	752e98dc5d452       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   693fdf4148b6d       kube-proxy-q8nns                             kube-system
	c4a70523738c5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fd900d9d1df8b       kube-controller-manager-no-preload-954807    kube-system
	cb2dbcb5faf83       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a854f42143668       kube-scheduler-no-preload-954807             kube-system
	62ad6fae814dc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   70d4de6a1280e       etcd-no-preload-954807                       kube-system
	1eb364639f4fd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6bc72f1ad37fc       kube-apiserver-no-preload-954807             kube-system
	
	
	==> coredns [00bf5ba9f6f7eb7ee174165b87d6143905a98c7e287e18bce58f41e656d7f5ef] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50782 - 54189 "HINFO IN 2416344462135623600.4311425446834346707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026133145s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-954807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-954807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-954807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_17_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-954807
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-954807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c4720016-79cb-477b-b38d-c7121463d568
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-7xjmh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-no-preload-954807                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-9grs2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-954807              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-954807     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-q8nns                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-954807              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s2lnr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mns4v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 115s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   Starting                 2m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m1s                 kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s                 kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m1s                 kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           117s                 node-controller  Node no-preload-954807 event: Registered Node no-preload-954807 in Controller
	  Normal   NodeReady                102s                 kubelet          Node no-preload-954807 status is now: NodeReady
	  Normal   Starting                 65s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)    kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)    kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)    kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-954807 event: Registered Node no-preload-954807 in Controller
	
	
	==> dmesg <==
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959] <==
	{"level":"warn","ts":"2025-10-26T15:18:38.486945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.520094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.573258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.620640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.679794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.704511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.728904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.749070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.767539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.783697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.807548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.837664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.868243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.886187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.940642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.969012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.998920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.072064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.099677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.114034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.183711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.231379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.308304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.367928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.693944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:19:38 up  5:02,  0 user,  load average: 3.45, 3.55, 3.09
	Linux no-preload-954807 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d0489895ef7987f8267922d4be82aea65bc786b1bc5d8331329f91f3b06f873] <==
	I1026 15:18:43.560744       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:18:43.560977       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:18:43.565489       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:18:43.565519       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:18:43.565536       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:18:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:18:43.786616       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:18:43.786647       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:18:43.786659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:18:43.787443       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:19:13.780646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:19:13.787318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:19:13.787418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:19:13.787316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:19:15.087497       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:19:15.087537       1 metrics.go:72] Registering metrics
	I1026 15:19:15.087602       1 controller.go:711] "Syncing nftables rules"
	I1026 15:19:23.782355       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:19:23.782480       1 main.go:301] handling current node
	I1026 15:19:33.781202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:19:33.781231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4] <==
	I1026 15:18:42.038031       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:18:42.135223       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:18:42.145892       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:18:42.186405       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:18:42.186458       1 policy_source.go:240] refreshing policies
	I1026 15:18:42.256928       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:18:42.266425       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:18:42.293223       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:18:42.293316       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:42.293359       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 15:18:42.293385       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:18:42.293392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:18:42.294640       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1026 15:18:42.360349       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:18:42.728489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:18:42.765106       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:18:43.483518       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:18:43.652077       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:18:43.789072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:18:43.857151       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:18:44.027943       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.141.213"}
	I1026 15:18:44.084957       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.143.214"}
	I1026 15:18:46.545989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:18:46.644429       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:18:46.744315       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6] <==
	I1026 15:18:46.250274       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:18:46.254537       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:18:46.257786       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:18:46.259088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:18:46.259174       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:18:46.283063       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:18:46.283064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:18:46.283118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:18:46.285383       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:18:46.287671       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:18:46.287686       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:18:46.288343       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:18:46.288410       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:18:46.289604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:18:46.290767       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:18:46.290847       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:18:46.290915       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-954807"
	I1026 15:18:46.290958       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:18:46.292963       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:18:46.294035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:18:46.317977       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:18:46.318006       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:18:46.318013       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:18:46.318153       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:18:46.321373       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [752e98dc5d452109116989f3da58948224ad6572aecbb195926fc5bbad6b9f8c] <==
	I1026 15:18:44.013860       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:18:44.214429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:18:44.315014       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:18:44.315145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:18:44.315260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:18:44.349331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:18:44.349443       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:18:44.353320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:18:44.353654       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:18:44.353829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:44.355242       1 config.go:200] "Starting service config controller"
	I1026 15:18:44.355295       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:18:44.355337       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:18:44.355363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:18:44.355396       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:18:44.355422       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:18:44.356123       1 config.go:309] "Starting node config controller"
	I1026 15:18:44.356173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:18:44.356201       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:18:44.456955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:18:44.457075       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:18:44.457137       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d] <==
	I1026 15:18:39.993967       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:18:44.568145       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:18:44.570784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:44.576956       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:18:44.577008       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:18:44.577045       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:18:44.577063       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:18:44.577086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.577100       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.577326       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:18:44.577428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:18:44.677910       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 15:18:44.677991       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.678052       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:18:46 no-preload-954807 kubelet[766]: I1026 15:18:46.892420     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/89db4534-81ce-41d2-b3fa-771b17a5d05b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mns4v\" (UID: \"89db4534-81ce-41d2-b3fa-771b17a5d05b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mns4v"
	Oct 26 15:18:47 no-preload-954807 kubelet[766]: W1026 15:18:47.210691     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad WatchSource:0}: Error finding container 54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad: Status 404 returned error can't find the container with id 54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad
	Oct 26 15:18:47 no-preload-954807 kubelet[766]: W1026 15:18:47.228386     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be WatchSource:0}: Error finding container 1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be: Status 404 returned error can't find the container with id 1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be
	Oct 26 15:18:50 no-preload-954807 kubelet[766]: I1026 15:18:50.711953     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:18:56 no-preload-954807 kubelet[766]: I1026 15:18:56.992018     766 scope.go:117] "RemoveContainer" containerID="5a1509739df1e6ab7e800389008a1fcbaa46d9c2bb85de5d2922dcc48df15006"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.013921     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mns4v" podStartSLOduration=5.980214341 podStartE2EDuration="11.01390391s" podCreationTimestamp="2025-10-26 15:18:46 +0000 UTC" firstStartedPulling="2025-10-26 15:18:47.214441629 +0000 UTC m=+13.724870651" lastFinishedPulling="2025-10-26 15:18:52.248131157 +0000 UTC m=+18.758560220" observedRunningTime="2025-10-26 15:18:53.00005421 +0000 UTC m=+19.510483232" watchObservedRunningTime="2025-10-26 15:18:57.01390391 +0000 UTC m=+23.524332932"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.996546     766 scope.go:117] "RemoveContainer" containerID="5a1509739df1e6ab7e800389008a1fcbaa46d9c2bb85de5d2922dcc48df15006"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.996934     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: E1026 15:18:57.997080     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:18:59 no-preload-954807 kubelet[766]: I1026 15:18:59.001124     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:18:59 no-preload-954807 kubelet[766]: E1026 15:18:59.001293     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:07 no-preload-954807 kubelet[766]: I1026 15:19:07.192441     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: I1026 15:19:08.033105     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: I1026 15:19:08.033416     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: E1026 15:19:08.033578     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:14 no-preload-954807 kubelet[766]: I1026 15:19:14.050727     766 scope.go:117] "RemoveContainer" containerID="7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544"
	Oct 26 15:19:17 no-preload-954807 kubelet[766]: I1026 15:19:17.192207     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:17 no-preload-954807 kubelet[766]: E1026 15:19:17.192939     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:29 no-preload-954807 kubelet[766]: I1026 15:19:29.823884     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: I1026 15:19:30.094676     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: I1026 15:19:30.094943     766 scope.go:117] "RemoveContainer" containerID="f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: E1026 15:19:30.104387     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:34 no-preload-954807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:19:35 no-preload-954807 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:19:35 no-preload-954807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [821bf60d5210953702380bf2d035ceeea898a0c09c6c1ea9cb80ae3fc42d8fd0] <==
	2025/10/26 15:18:52 Using namespace: kubernetes-dashboard
	2025/10/26 15:18:52 Using in-cluster config to connect to apiserver
	2025/10/26 15:18:52 Using secret token for csrf signing
	2025/10/26 15:18:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:18:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:18:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:18:52 Generating JWE encryption key
	2025/10/26 15:18:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:18:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:18:52 Initializing JWE encryption key from synchronized object
	2025/10/26 15:18:52 Creating in-cluster Sidecar client
	2025/10/26 15:18:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:18:52 Serving insecurely on HTTP port: 9090
	2025/10/26 15:19:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:18:52 Starting overwatch
	
	
	==> storage-provisioner [2f1b442c63394a6e1e2d9967a43cfad768604badfe58c12bd0b44110c9f676b6] <==
	I1026 15:19:14.125431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:19:14.184810       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:19:14.185001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:19:14.188140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:17.643656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:21.904547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:25.502667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:28.556885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.580139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.585330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:31.585568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:19:31.585803       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a!
	I1026 15:19:31.586070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81e2b564-6d77-48d7-9a32-6c72ab01dcb0", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a became leader
	W1026 15:19:31.588406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.594705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:31.688957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a!
	W1026 15:19:33.597562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:33.602678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:35.613270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:35.633470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:37.641337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:37.646364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544] <==
	I1026 15:18:43.839840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:19:13.860545       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
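The fatal at main.go:39 in the second storage-provisioner block above (7f2f05ce…) is the provisioner's startup probe of the apiserver: a GET against the service VIP https://10.96.0.1:443/version that times out because kube-proxy had not yet reprogrammed the VIP after the node restart. A minimal sketch of that kind of check with client-go (package layout and names here are illustrative, not the provisioner's actual source):

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config: reads the mounted service-account token and the
		// KUBERNETES_SERVICE_HOST/PORT env vars that point at the service VIP.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error creating clientset: %v", err)
		}
		// GET /version; if nothing answers on 10.96.0.1:443 yet, the dial
		// times out, matching the "i/o timeout" fatal in the log above.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("apiserver version: %s", v.GitVersion)
	}

The replacement container (2f1b442c…) started after the apiserver came back, which is why it proceeds through the same probe into leader election instead of exiting.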
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-954807 -n no-preload-954807: exit status 2 (371.775992ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-954807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
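The kubectl query above filters server-side with a field selector rather than fetching and filtering every pod. Roughly the equivalent call through client-go would look like this (a hedged sketch, not part of the test harness; context selection is elided and the default kubeconfig path is assumed):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); the harness would
		// additionally pin the "no-preload-954807" context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Server-side filter: pods in any namespace whose phase is not Running.
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}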
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-954807
helpers_test.go:243: (dbg) docker inspect no-preload-954807:

-- stdout --
	[
	    {
	        "Id": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	        "Created": "2025-10-26T15:16:49.517959935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:18:25.253217969Z",
	            "FinishedAt": "2025-10-26T15:18:24.200414224Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hostname",
	        "HostsPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/hosts",
	        "LogPath": "/var/lib/docker/containers/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786-json.log",
	        "Name": "/no-preload-954807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-954807:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-954807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786",
	                "LowerDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d429f28550a9da736d0ffdc204b6f10fda27eb3686f85e1d0cc72878bd1ee00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-954807",
	                "Source": "/var/lib/docker/volumes/no-preload-954807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-954807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-954807",
	                "name.minikube.sigs.k8s.io": "no-preload-954807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a9c6558973af4da05d4687081ad321bee74f16a14068b20d7d0ef5c2e8a0476",
	            "SandboxKey": "/var/run/docker/netns/5a9c6558973a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-954807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:4e:fa:1c:63:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85855106e1f3577e90f02f145412c517c0b5aba224f5d8005b2109486b8acb25",
	                    "EndpointID": "0f69efc50d7f590a2dbc36762c032253ce6c4e2310767f3d876ad684eff54bfb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-954807",
	                        "974a34e5ba04"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
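Two details in the inspect output above are worth noting: PortBindings in HostConfig carries empty HostPort values because minikube lets Docker pick free host ports, and the bindings actually chosen only appear under NetworkSettings.Ports once the container is running (22/tcp -> 33847, 8443/tcp -> 33850 here). The template lookups in the start log further down, e.g. {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, read exactly that structure. A minimal Go sketch of the same decode (the struct is ad hoc, covering only the fields used):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// containerInfo models only the slice of `docker inspect` output we need.
	type containerInfo struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-954807").Output()
		if err != nil {
			panic(err)
		}
		var containers []containerInfo // inspect always returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// Equivalent of the template above: host port forwarded to SSH (22/tcp).
		fmt.Println(containers[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}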
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807: exit status 2 (394.053389ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
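minikube status reports component state through the exit code as well as the printed fields, which is why the harness treats exit status 2 as possibly benign: the kic container (Host) is Running while the paused control plane pushes the overall status non-zero. A rough sketch of recovering both signals from Go (assuming the same binary path the test uses):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-954807")
		out, err := cmd.Output() // stdout is captured even on a non-zero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// e.g. "Running" with exit code 2 while kubelet/apiserver are down
			fmt.Printf("host=%s exit=%d\n", out, exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("host=%s exit=0\n", out)
	}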
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-954807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-954807 logs -n 25: (1.313619239s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                          │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                              │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                               │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:18:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:18:24.873586  908785 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:18:24.873824  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.873852  908785 out.go:374] Setting ErrFile to fd 2...
	I1026 15:18:24.873873  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.874151  908785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:18:24.874543  908785 out.go:368] Setting JSON to false
	I1026 15:18:24.875517  908785 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18057,"bootTime":1761473848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:18:24.875610  908785 start.go:141] virtualization:  
	I1026 15:18:24.878798  908785 out.go:179] * [no-preload-954807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:18:24.882718  908785 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:18:24.882793  908785 notify.go:220] Checking for updates...
	I1026 15:18:24.886906  908785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:18:24.889801  908785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:24.892783  908785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:18:24.895757  908785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:18:24.898642  908785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:18:24.901948  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:24.902567  908785 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:18:24.948990  908785 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:18:24.949107  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.042150  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.031901314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.042260  908785 docker.go:318] overlay module found
	I1026 15:18:25.045300  908785 out.go:179] * Using the docker driver based on existing profile
	I1026 15:18:25.048156  908785 start.go:305] selected driver: docker
	I1026 15:18:25.048169  908785 start.go:925] validating driver "docker" against &{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.048276  908785 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:18:25.049069  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.141402  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.129156893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.141737  908785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:25.141766  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:25.141824  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:25.141856  908785 start.go:349] cluster config:
	{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.145012  908785 out.go:179] * Starting "no-preload-954807" primary control-plane node in "no-preload-954807" cluster
	I1026 15:18:25.147872  908785 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:18:25.150783  908785 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:18:25.153691  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:25.153844  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.154159  908785 cache.go:107] acquiring lock: {Name:mkbe2086c35e9fcbe8c03bdef4b41f05ca228154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154244  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:18:25.154253  908785 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.981µs
	I1026 15:18:25.154266  908785 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:18:25.154278  908785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:18:25.154523  908785 cache.go:107] acquiring lock: {Name:mk2325fad129f4b7d5aa09cccfdaa3da809a73fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154591  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:18:25.154599  908785 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 83.743µs
	I1026 15:18:25.154607  908785 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:18:25.154618  908785 cache.go:107] acquiring lock: {Name:mk54c57481d4cb891842b1b352451c8a69a47281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154662  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:18:25.154672  908785 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 56.033µs
	I1026 15:18:25.154686  908785 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:18:25.154696  908785 cache.go:107] acquiring lock: {Name:mk5a8cbd33cc84011ebd29296028bb78893eefc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154727  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:18:25.154731  908785 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.53µs
	I1026 15:18:25.154737  908785 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:18:25.154746  908785 cache.go:107] acquiring lock: {Name:mkaf3dfd27f1d15aad668c191c7cc85c71d2c9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154771  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:18:25.154776  908785 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.376µs
	I1026 15:18:25.154782  908785 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:18:25.154792  908785 cache.go:107] acquiring lock: {Name:mk964a36cda2ac1ad4a9006d14be02c6bd71c41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154916  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:18:25.154923  908785 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 131.685µs
	I1026 15:18:25.154929  908785 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:18:25.154963  908785 cache.go:107] acquiring lock: {Name:mkef4d9c96ab97f5a848fa8d925b343812fa37ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155004  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:18:25.155014  908785 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 71.73µs
	I1026 15:18:25.155020  908785 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:18:25.155031  908785 cache.go:107] acquiring lock: {Name:mkc8d2557eb259bb5390e2f2db4396a6aec79411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155060  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:18:25.155065  908785 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 35.389µs
	I1026 15:18:25.155076  908785 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:18:25.155087  908785 cache.go:87] Successfully saved all images to host disk.
	I1026 15:18:25.186482  908785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:18:25.186502  908785 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:18:25.186515  908785 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:18:25.186538  908785 start.go:360] acquireMachinesLock for no-preload-954807: {Name:mk3de11c10d64abd2c458c411445bde4bf32881c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.186600  908785 start.go:364] duration metric: took 46.409µs to acquireMachinesLock for "no-preload-954807"
	I1026 15:18:25.186620  908785 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:18:25.186626  908785 fix.go:54] fixHost starting: 
	I1026 15:18:25.186892  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.218587  908785 fix.go:112] recreateIfNeeded on no-preload-954807: state=Stopped err=<nil>
	W1026 15:18:25.218633  908785 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:18:23.824889  906105 out.go:252]   - Booting up control plane ...
	I1026 15:18:23.825002  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:23.825084  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:23.826750  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:23.843130  906105 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:23.843590  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:23.851900  906105 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:23.852216  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:23.852513  906105 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:24.001209  906105 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:24.001367  906105 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:25.996925  906105 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000832133s
	I1026 15:18:26.000302  906105 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:26.000400  906105 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1026 15:18:26.000511  906105 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:26.000594  906105 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:25.221939  908785 out.go:252] * Restarting existing docker container for "no-preload-954807" ...
	I1026 15:18:25.222028  908785 cli_runner.go:164] Run: docker start no-preload-954807
	I1026 15:18:25.539012  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.573935  908785 kic.go:430] container "no-preload-954807" state is running.
	I1026 15:18:25.574383  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:25.603715  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.604226  908785 machine.go:93] provisionDockerMachine start ...
	I1026 15:18:25.604316  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:25.634297  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:25.634626  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:25.634636  908785 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:18:25.636397  908785 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:18:28.841282  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:28.841360  908785 ubuntu.go:182] provisioning hostname "no-preload-954807"
	I1026 15:18:28.841444  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:28.866436  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:28.866762  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:28.866774  908785 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-954807 && echo "no-preload-954807" | sudo tee /etc/hostname
	I1026 15:18:29.069155  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:29.069302  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.098780  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:29.099104  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:29.099122  908785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-954807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-954807/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-954807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:18:29.276929  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:18:29.276952  908785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:18:29.276983  908785 ubuntu.go:190] setting up certificates
	I1026 15:18:29.276993  908785 provision.go:84] configureAuth start
	I1026 15:18:29.277060  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:29.299794  908785 provision.go:143] copyHostCerts
	I1026 15:18:29.299860  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:18:29.299879  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:18:29.299957  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:18:29.300067  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:18:29.300072  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:18:29.300099  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:18:29.300159  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:18:29.300168  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:18:29.300193  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:18:29.300245  908785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.no-preload-954807 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-954807]
	I1026 15:18:30.781617  906105 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.780617084s
	I1026 15:18:29.899785  908785 provision.go:177] copyRemoteCerts
	I1026 15:18:29.899900  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:18:29.899970  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.942702  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.078143  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:18:30.113207  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:18:30.146061  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:18:30.178703  908785 provision.go:87] duration metric: took 901.687509ms to configureAuth
	I1026 15:18:30.178771  908785 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:18:30.178995  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:30.179148  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.207087  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:30.207408  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:30.207425  908785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:18:30.676969  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:18:30.677026  908785 machine.go:96] duration metric: took 5.072780445s to provisionDockerMachine
	I1026 15:18:30.677052  908785 start.go:293] postStartSetup for "no-preload-954807" (driver="docker")
	I1026 15:18:30.677077  908785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:18:30.677149  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:18:30.677252  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.710413  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.823871  908785 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:18:30.827555  908785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:18:30.827587  908785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:18:30.827599  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:18:30.827656  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:18:30.827744  908785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:18:30.827864  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:18:30.838700  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:30.871356  908785 start.go:296] duration metric: took 194.275536ms for postStartSetup
	I1026 15:18:30.871461  908785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:18:30.871518  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.902387  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.034591  908785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:18:31.045225  908785 fix.go:56] duration metric: took 5.858591617s for fixHost
	I1026 15:18:31.045253  908785 start.go:83] releasing machines lock for "no-preload-954807", held for 5.85864381s
	I1026 15:18:31.045332  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:31.106399  908785 ssh_runner.go:195] Run: cat /version.json
	I1026 15:18:31.106456  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.106711  908785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:18:31.106777  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.151426  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.158586  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.396049  908785 ssh_runner.go:195] Run: systemctl --version
	I1026 15:18:31.403261  908785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:18:31.469937  908785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:18:31.482908  908785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:18:31.483041  908785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:18:31.493995  908785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
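
The `find ... -exec mv` step above renames any bridge or podman CNI config so the runtime stops loading it; in this run nothing matched. A rough Go equivalent, with `disableBridgeCNIs` as an illustrative name only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs under dir with a
// .mk_disabled suffix, mirroring the find/mv command in the log. Sketch
// only; the real step runs via sudo on the node.
func disableBridgeCNIs(dir string) error {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, p := range matches {
			if strings.HasSuffix(p, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return err
			}
			fmt.Printf("%s, ", p) // same reporting as find's -printf "%p, "
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		panic(err)
	}
}
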
	I1026 15:18:31.494066  908785 start.go:495] detecting cgroup driver to use...
	I1026 15:18:31.494113  908785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:18:31.494187  908785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:18:31.521177  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:18:31.541265  908785 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:18:31.541370  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:18:31.569119  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:18:31.584298  908785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:18:31.790771  908785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:18:32.003146  908785 docker.go:234] disabling docker service ...
	I1026 15:18:32.003270  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:18:32.027531  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:18:32.052390  908785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:18:32.244277  908785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:18:32.429463  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:18:32.445776  908785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:18:32.465349  908785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:18:32.465428  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.478857  908785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:18:32.478978  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.488961  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.499025  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.509768  908785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:18:32.519485  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.529990  908785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.539869  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
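
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroupfs as the cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A sketch of the first rewrite done with Go's regexp instead of sed; the remaining edits follow the same shape with different expressions:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of:
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
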
	I1026 15:18:32.550905  908785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:18:32.559187  908785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:18:32.568293  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:32.731012  908785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:18:32.890143  908785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:18:32.890243  908785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:18:32.895296  908785 start.go:563] Will wait 60s for crictl version
	I1026 15:18:32.895370  908785 ssh_runner.go:195] Run: which crictl
	I1026 15:18:32.899632  908785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:18:32.959445  908785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:18:32.959551  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:32.999198  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:33.053114  908785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:18:32.381923  906105 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.381595886s
	I1026 15:18:34.004615  906105 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.004015537s
	I1026 15:18:34.039440  906105 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:34.060957  906105 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:34.093820  906105 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:34.094029  906105 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-494684 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:34.116373  906105 kubeadm.go:318] [bootstrap-token] Using token: opo3lq.zbfbsr53k4i0zecq
	I1026 15:18:33.056258  908785 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:18:33.077802  908785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:18:33.083627  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
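
The bash pipeline above rewrites /etc/hosts: strip any stale host.minikube.internal entry, then append the gateway IP. The same rewrite is repeated later for control-plane.minikube.internal. A Go sketch of the logic; `injectHostRecord` is a hypothetical name, and the real step stages a temp file and copies it in with sudo rather than writing directly:

package main

import (
	"os"
	"strings"
)

// injectHostRecord drops any line ending in "\thost.minikube.internal" from
// /etc/hosts and appends a fresh mapping to the gateway IP. Needs root.
func injectHostRecord(gatewayIP string) error {
	const suffix = "\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, suffix) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, gatewayIP+suffix)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := injectHostRecord("192.168.85.1"); err != nil {
		panic(err)
	}
}
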
	I1026 15:18:33.094756  908785 kubeadm.go:883] updating cluster {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:18:33.094867  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:33.094911  908785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:33.140777  908785 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:18:33.140799  908785 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:18:33.140815  908785 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:18:33.140916  908785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-954807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:18:33.140993  908785 ssh_runner.go:195] Run: crio config
	I1026 15:18:33.234362  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:33.234382  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:33.234396  908785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:18:33.234442  908785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-954807 NodeName:no-preload-954807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:18:33.234611  908785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-954807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:18:33.234704  908785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:18:33.244949  908785 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:18:33.245042  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:18:33.252734  908785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:18:33.266334  908785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:18:33.280280  908785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
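
The kubeadm/kubelet/kube-proxy documents above are rendered from the detected settings (cgroup driver, CRI socket, pod and service CIDRs) and then copied onto the node. A much-simplified text/template sketch of rendering one such fragment; this is not minikube's actual template, just an illustration of the mechanism:

package main

import (
	"os"
	"text/template"
)

// A toy template covering a slice of the KubeletConfiguration shown above.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletCfg))
	if err := t.Execute(os.Stdout, map[string]string{
		"CgroupDriver": "cgroupfs",
		"CRISocket":    "unix:///var/run/crio/crio.sock",
		"DNSDomain":    "cluster.local",
	}); err != nil {
		panic(err)
	}
}
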
	I1026 15:18:33.300014  908785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:18:33.305316  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:33.315583  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:33.467826  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:33.491186  908785 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807 for IP: 192.168.85.2
	I1026 15:18:33.491220  908785 certs.go:195] generating shared ca certs ...
	I1026 15:18:33.491258  908785 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:33.491442  908785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:18:33.491517  908785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:18:33.491547  908785 certs.go:257] generating profile certs ...
	I1026 15:18:33.491665  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key
	I1026 15:18:33.491771  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805
	I1026 15:18:33.491845  908785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key
	I1026 15:18:33.492003  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:18:33.492056  908785 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:18:33.492084  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:18:33.492115  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:18:33.492158  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:18:33.492198  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:18:33.492264  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:33.493002  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:18:33.513517  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:18:33.532884  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:18:33.555231  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:18:33.579754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:18:33.602447  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:18:33.628293  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:18:33.684754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:18:33.753264  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:18:33.821238  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:18:33.843108  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:18:33.862371  908785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:18:33.878516  908785 ssh_runner.go:195] Run: openssl version
	I1026 15:18:33.885509  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:18:33.895167  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.900931  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.901140  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.967665  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:18:33.976773  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:18:33.985438  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990423  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990496  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:18:34.052535  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:18:34.062937  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:18:34.072240  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076658  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076793  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.127445  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:18:34.136993  908785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:18:34.141905  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:18:34.197715  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:18:34.255022  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:18:34.321728  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:18:34.389895  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:18:34.548526  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
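
Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours, which is what decides whether certs get regenerated on restart. The same check in Go with crypto/x509; `expiresWithin` is an illustrative helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (86400s = 24h in the log above).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
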
	I1026 15:18:34.681856  908785 kubeadm.go:400] StartCluster: {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:34.681971  908785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:18:34.682063  908785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:18:34.783414  908785 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:18:34.783566  908785 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:18:34.783587  908785 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:18:34.783621  908785 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:18:34.783639  908785 cri.go:89] found id: ""
	I1026 15:18:34.783719  908785 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:18:34.811816  908785 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:18:34.812057  908785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:18:34.827966  908785 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:18:34.828092  908785 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:18:34.828177  908785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:18:34.843255  908785 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:18:34.843698  908785 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-954807" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.843791  908785 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-954807" cluster setting kubeconfig missing "no-preload-954807" context setting]
	I1026 15:18:34.844059  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.845642  908785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:18:34.871634  908785 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 15:18:34.871666  908785 kubeadm.go:601] duration metric: took 43.554458ms to restartPrimaryControlPlane
	I1026 15:18:34.871675  908785 kubeadm.go:402] duration metric: took 189.829653ms to StartCluster
	I1026 15:18:34.871690  908785 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.871749  908785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.872330  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.872519  908785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:34.873018  908785 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:34.873111  908785 addons.go:69] Setting storage-provisioner=true in profile "no-preload-954807"
	I1026 15:18:34.873126  908785 addons.go:238] Setting addon storage-provisioner=true in "no-preload-954807"
	W1026 15:18:34.873137  908785 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:18:34.873163  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873189  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:34.873264  908785 addons.go:69] Setting dashboard=true in profile "no-preload-954807"
	I1026 15:18:34.873297  908785 addons.go:238] Setting addon dashboard=true in "no-preload-954807"
	W1026 15:18:34.873336  908785 addons.go:247] addon dashboard should already be in state true
	I1026 15:18:34.873368  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873660  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877946  908785 addons.go:69] Setting default-storageclass=true in profile "no-preload-954807"
	I1026 15:18:34.878023  908785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-954807"
	I1026 15:18:34.877565  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.878787  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877575  908785 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:34.888833  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:34.921307  908785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.925761  908785 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:34.925783  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:34.925866  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.941729  908785 addons.go:238] Setting addon default-storageclass=true in "no-preload-954807"
	W1026 15:18:34.941762  908785 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:18:34.941790  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.942216  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.950093  908785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:18:34.956801  908785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:18:34.119508  906105 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:34.119644  906105 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:34.125645  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:34.136618  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:34.144003  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:34.155143  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:34.162423  906105 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:34.413457  906105 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:35.074961  906105 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:35.413379  906105 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:35.414997  906105 kubeadm.go:318] 
	I1026 15:18:35.415072  906105 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:35.415078  906105 kubeadm.go:318] 
	I1026 15:18:35.415155  906105 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:35.415160  906105 kubeadm.go:318] 
	I1026 15:18:35.415185  906105 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:35.419772  906105 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:35.419856  906105 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:35.419905  906105 kubeadm.go:318] 
	I1026 15:18:35.420002  906105 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:35.420007  906105 kubeadm.go:318] 
	I1026 15:18:35.420066  906105 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:35.420070  906105 kubeadm.go:318] 
	I1026 15:18:35.420148  906105 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:35.420235  906105 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:35.420314  906105 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:35.420324  906105 kubeadm.go:318] 
	I1026 15:18:35.420408  906105 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:35.420488  906105 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:35.420497  906105 kubeadm.go:318] 
	I1026 15:18:35.420612  906105 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.420744  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:18:35.420794  906105 kubeadm.go:318] 	--control-plane 
	I1026 15:18:35.420800  906105 kubeadm.go:318] 
	I1026 15:18:35.420895  906105 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:35.420900  906105 kubeadm.go:318] 
	I1026 15:18:35.420998  906105 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.421110  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:18:35.440042  906105 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:18:35.440280  906105 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:18:35.440391  906105 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:35.440407  906105 cni.go:84] Creating CNI manager for ""
	I1026 15:18:35.440414  906105 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:35.444207  906105 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:18:35.447185  906105 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:18:35.456310  906105 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:18:35.456334  906105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:18:35.507388  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:18:36.090917  906105 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:36.091006  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:36.091050  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-494684 minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=default-k8s-diff-port-494684 minikube.k8s.io/primary=true
	I1026 15:18:36.514936  906105 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:36.515052  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.015410  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.515116  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.015362  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.515615  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.015108  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.233922  906105 kubeadm.go:1113] duration metric: took 3.142974166s to wait for elevateKubeSystemPrivileges
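
The burst of `kubectl get sa default` runs above, spaced roughly 500ms apart, is a poll: kubeadm has finished, and the test waits for the default ServiceAccount to exist before granting kube-system privileges. A sketch of such a retry loop; `waitForDefaultSA` is a hypothetical name for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubectl string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl", time.Minute); err != nil {
		panic(err)
	}
}
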
	I1026 15:18:39.233954  906105 kubeadm.go:402] duration metric: took 23.046817686s to StartCluster
	I1026 15:18:39.233975  906105 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.234032  906105 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:39.235069  906105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.235311  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:18:39.235322  906105 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:39.235586  906105 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:39.235621  906105 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:39.235684  906105 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.235698  906105 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.235723  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.236178  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.236758  906105 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.236781  906105 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-494684"
	I1026 15:18:39.237117  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.240502  906105 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:39.252908  906105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:39.270053  906105 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.270095  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.270522  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.282068  906105 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.959584  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:18:34.959611  908785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:18:34.959687  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.980722  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:34.990491  908785 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:34.990523  908785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:34.990600  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:35.026564  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.044932  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.366750  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:18:35.366822  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:18:35.430297  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:35.447981  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:35.526736  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:18:35.526816  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:18:35.541300  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:35.557089  908785 node_ready.go:35] waiting up to 6m0s for node "no-preload-954807" to be "Ready" ...
	I1026 15:18:35.640785  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:18:35.640819  908785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:18:35.771188  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:18:35.771215  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:18:35.825305  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:18:35.825332  908785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:18:35.945173  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:18:35.945241  908785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:18:36.043908  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:18:36.043985  908785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:18:36.074085  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:18:36.074164  908785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:18:36.114626  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:18:36.114697  908785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:18:36.162322  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
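
The dashboard addon lands as a single kubectl invocation with one -f flag per manifest scp'd in the steps above. A sketch of assembling that command in Go; `applyManifests` is an illustrative helper, and only two of the ten manifest paths are shown:

package main

import "os/exec"

// applyManifests builds the long `kubectl apply` from the log: sudo passes
// KUBECONFIG through as an environment assignment, then one -f per manifest.
func applyManifests(kubectl string, manifests []string) *exec.Cmd {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...)
}

func main() {
	cmd := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the rest
		})
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
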
	I1026 15:18:39.285064  906105 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.285091  906105 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:39.285174  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.313693  906105 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:39.313726  906105 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:39.313788  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.329825  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.352237  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.833145  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:18:39.835130  906105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:39.865906  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.891557  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:41.038716  906105 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.205461428s)
	I1026 15:18:41.038845  906105 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 15:18:41.038811  906105 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203616764s)
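
Unescaped, the sed pipeline that just completed splices this fragment (plus a `log` directive ahead of `errors`) into the CoreDNS Corefile, which is what makes host.minikube.internal resolvable from inside pods:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}
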
	I1026 15:18:41.039823  906105 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:18:41.560927  906105 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-494684" context rescaled to 1 replicas
	I1026 15:18:41.767543  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.901592798s)
	I1026 15:18:41.767597  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.875965389s)
	I1026 15:18:41.789656  906105 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:18:41.792471  906105 addons.go:514] duration metric: took 2.556838269s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:18:42.009830  908785 node_ready.go:49] node "no-preload-954807" is "Ready"
	I1026 15:18:42.009866  908785 node_ready.go:38] duration metric: took 6.452696965s for node "no-preload-954807" to be "Ready" ...
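
"waiting ... for node to be Ready" boils down to polling the node's NodeReady condition through the API server. A client-go sketch of a single probe; minikube's own poller wraps this in retries under the 6m0s timeout shown above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the node and reports whether its NodeReady condition
// is True, the check behind the "is Ready" log line above.
func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady(clientcmd.RecommendedHomeFile, "no-preload-954807")
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", ready)
}
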
	I1026 15:18:42.009885  908785 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:42.009955  908785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:44.074337  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.626316807s)
	I1026 15:18:44.074430  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.533054521s)
	I1026 15:18:44.093634  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.931220048s)
	I1026 15:18:44.093821  908785 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.083847423s)
	I1026 15:18:44.093842  908785 api_server.go:72] duration metric: took 9.221303285s to wait for apiserver process to appear ...
	I1026 15:18:44.093849  908785 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:44.093871  908785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:18:44.096535  908785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-954807 addons enable metrics-server
	
	I1026 15:18:44.100991  908785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:18:44.103937  908785 addons.go:514] duration metric: took 9.230903875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:18:44.105206  908785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:18:44.106296  908785 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:44.106318  908785 api_server.go:131] duration metric: took 12.458566ms to wait for apiserver health ...
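
[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal Go sketch of that probe, using the endpoint from the log; note the real client authenticates with the cluster CA, which this sketch skips to stay self-contained:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log line above.
        url := "https://192.168.85.2:8443/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            // Skipping verification keeps the sketch standalone; the real
            // check trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("healthz not up yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200: ok
    }
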
	I1026 15:18:44.106327  908785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:44.109695  908785 system_pods.go:59] 8 kube-system pods found
	I1026 15:18:44.109733  908785 system_pods.go:61] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.109742  908785 system_pods.go:61] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.109750  908785 system_pods.go:61] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.109757  908785 system_pods.go:61] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.109764  908785 system_pods.go:61] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.109768  908785 system_pods.go:61] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.109775  908785 system_pods.go:61] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.109780  908785 system_pods.go:61] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.109786  908785 system_pods.go:74] duration metric: took 3.453281ms to wait for pod list to return data ...
	I1026 15:18:44.109794  908785 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:44.112368  908785 default_sa.go:45] found service account: "default"
	I1026 15:18:44.112388  908785 default_sa.go:55] duration metric: took 2.586901ms for default service account to be created ...
	I1026 15:18:44.112396  908785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:44.115134  908785 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:44.115216  908785 system_pods.go:89] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.115250  908785 system_pods.go:89] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.115283  908785 system_pods.go:89] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.115306  908785 system_pods.go:89] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.115328  908785 system_pods.go:89] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.115352  908785 system_pods.go:89] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.115383  908785 system_pods.go:89] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.115402  908785 system_pods.go:89] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.115424  908785 system_pods.go:126] duration metric: took 3.020964ms to wait for k8s-apps to be running ...
	I1026 15:18:44.115449  908785 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:44.115528  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:44.132618  908785 system_svc.go:56] duration metric: took 17.163659ms WaitForService to wait for kubelet
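
[editor's note] The kubelet service check above is just `systemctl is-active --quiet kubelet`, whose exit status encodes active/inactive. A minimal Go equivalent of that probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active;
        // the WaitForService step above keys off exactly this exit status.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
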
	I1026 15:18:44.132642  908785 kubeadm.go:586] duration metric: took 9.260101546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:44.132663  908785 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:44.135549  908785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:18:44.135577  908785 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:44.135589  908785 node_conditions.go:105] duration metric: took 2.919573ms to run NodePressure ...
	I1026 15:18:44.135602  908785 start.go:241] waiting for startup goroutines ...
	I1026 15:18:44.135610  908785 start.go:246] waiting for cluster config update ...
	I1026 15:18:44.135620  908785 start.go:255] writing updated cluster config ...
	I1026 15:18:44.135912  908785 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:44.139910  908785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:44.143746  908785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:43.043031  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:45.079469  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:46.199968  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:48.651556  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:47.542993  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:50.043710  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:52.043878  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:51.150539  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:53.150747  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:54.543184  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:57.043649  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:55.650612  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:57.655226  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:59.545897  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:02.043891  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:00.154271  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:02.649805  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.650562  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.543488  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.043487  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.149715  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.650530  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.542582  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:11.543176  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:12.149228  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.157707  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.043223  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.043564  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.651190  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:19.150299  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:18.543877  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:21.042737  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	I1026 15:19:21.149393  908785 pod_ready.go:94] pod "coredns-66bc5c9577-7xjmh" is "Ready"
	I1026 15:19:21.149423  908785 pod_ready.go:86] duration metric: took 37.005599421s for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.152545  908785 pod_ready.go:83] waiting for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.157626  908785 pod_ready.go:94] pod "etcd-no-preload-954807" is "Ready"
	I1026 15:19:21.157652  908785 pod_ready.go:86] duration metric: took 5.07725ms for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.160404  908785 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.165211  908785 pod_ready.go:94] pod "kube-apiserver-no-preload-954807" is "Ready"
	I1026 15:19:21.165241  908785 pod_ready.go:86] duration metric: took 4.811401ms for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.171007  908785 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.347722  908785 pod_ready.go:94] pod "kube-controller-manager-no-preload-954807" is "Ready"
	I1026 15:19:21.347751  908785 pod_ready.go:86] duration metric: took 176.720385ms for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.547746  908785 pod_ready.go:83] waiting for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.947461  908785 pod_ready.go:94] pod "kube-proxy-q8nns" is "Ready"
	I1026 15:19:21.947490  908785 pod_ready.go:86] duration metric: took 399.680606ms for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.147722  908785 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548568  908785 pod_ready.go:94] pod "kube-scheduler-no-preload-954807" is "Ready"
	I1026 15:19:22.548648  908785 pod_ready.go:86] duration metric: took 400.89538ms for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548677  908785 pod_ready.go:40] duration metric: took 38.40866909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:22.645734  908785 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:22.649853  908785 out.go:179] * Done! kubectl is now configured to use "no-preload-954807" cluster and "default" namespace by default
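
[editor's note] The pod_ready loop that just finished polls each labelled kube-system pod until its PodReady condition reports True (or the pod is gone). A rough client-go sketch of one such check, assuming a kubeconfig at the default path; minikube's own implementation differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll pods carrying one of the control-plane labels, as the log does
        // for k8s-app=kube-dns, component=etcd, and so on.
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
                return
            }
            time.Sleep(2 * time.Second) // the log retries on a similar cadence
        }
    }
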
	I1026 15:19:22.543199  906105 node_ready.go:49] node "default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:22.543232  906105 node_ready.go:38] duration metric: took 41.503374902s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:19:22.543247  906105 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:19:22.543322  906105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:19:22.558433  906105 api_server.go:72] duration metric: took 43.323081637s to wait for apiserver process to appear ...
	I1026 15:19:22.558456  906105 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:19:22.558476  906105 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1026 15:19:22.574126  906105 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1026 15:19:22.576212  906105 api_server.go:141] control plane version: v1.34.1
	I1026 15:19:22.576245  906105 api_server.go:131] duration metric: took 17.782398ms to wait for apiserver health ...
	I1026 15:19:22.576254  906105 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:19:22.585670  906105 system_pods.go:59] 8 kube-system pods found
	I1026 15:19:22.585709  906105 system_pods.go:61] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.585717  906105 system_pods.go:61] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.585725  906105 system_pods.go:61] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.585730  906105 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.585736  906105 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.585746  906105 system_pods.go:61] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.585754  906105 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.585761  906105 system_pods.go:61] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.585776  906105 system_pods.go:74] duration metric: took 9.51752ms to wait for pod list to return data ...
	I1026 15:19:22.585789  906105 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:19:22.589014  906105 default_sa.go:45] found service account: "default"
	I1026 15:19:22.589043  906105 default_sa.go:55] duration metric: took 3.244286ms for default service account to be created ...
	I1026 15:19:22.589054  906105 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:19:22.597482  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.597521  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.597529  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.597536  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.597541  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.597546  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.597551  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.597557  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.597566  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.597592  906105 retry.go:31] will retry after 270.93989ms: missing components: kube-dns
	I1026 15:19:22.890382  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.890422  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.890429  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.890436  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.890442  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.890447  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.890454  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.890458  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.890466  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.890480  906105 retry.go:31] will retry after 311.29252ms: missing components: kube-dns
	I1026 15:19:23.207300  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.207338  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.207345  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.207352  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.207356  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.207360  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.207365  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.207369  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.207375  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.207394  906105 retry.go:31] will retry after 338.060587ms: missing components: kube-dns
	I1026 15:19:23.549179  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.549216  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.549224  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.549231  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.549235  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.549239  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.549244  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.549248  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.549254  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.549269  906105 retry.go:31] will retry after 395.592761ms: missing components: kube-dns
	I1026 15:19:23.949803  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.949839  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Running
	I1026 15:19:23.949846  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.949854  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.949861  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.949866  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.949874  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.949879  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.949884  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Running
	I1026 15:19:23.949892  906105 system_pods.go:126] duration metric: took 1.360831952s to wait for k8s-apps to be running ...
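
[editor's note] The `will retry after …` lines above come from a retry helper that sleeps a growing, jittered delay between attempts until the missing component (here kube-dns) shows up. A self-contained sketch of that pattern; the growth factor and jitter below are assumptions, not minikube's actual schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts are exhausted, sleeping a
    // jittered, growing delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Double the delay each round and add up to 25% random jitter.
            d := base * time.Duration(1<<i)
            d += time.Duration(rand.Int63n(int64(d) / 4))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("done:", err, "after", calls, "calls")
    }
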
	I1026 15:19:23.949905  906105 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:19:23.949966  906105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:23.964269  906105 system_svc.go:56] duration metric: took 14.355022ms WaitForService to wait for kubelet
	I1026 15:19:23.964297  906105 kubeadm.go:586] duration metric: took 44.728950966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:19:23.964316  906105 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:19:23.967634  906105 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:19:23.967669  906105 node_conditions.go:123] node cpu capacity is 2
	I1026 15:19:23.967684  906105 node_conditions.go:105] duration metric: took 3.327873ms to run NodePressure ...
	I1026 15:19:23.967696  906105 start.go:241] waiting for startup goroutines ...
	I1026 15:19:23.967745  906105 start.go:246] waiting for cluster config update ...
	I1026 15:19:23.967757  906105 start.go:255] writing updated cluster config ...
	I1026 15:19:23.968071  906105 ssh_runner.go:195] Run: rm -f paused
	I1026 15:19:23.972391  906105 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:23.978846  906105 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.984309  906105 pod_ready.go:94] pod "coredns-66bc5c9577-zm8vb" is "Ready"
	I1026 15:19:23.984341  906105 pod_ready.go:86] duration metric: took 5.466432ms for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.987133  906105 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.992540  906105 pod_ready.go:94] pod "etcd-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.992578  906105 pod_ready.go:86] duration metric: took 5.419399ms for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.995145  906105 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.999951  906105 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.999979  906105 pod_ready.go:86] duration metric: took 4.806707ms for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.003124  906105 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.376257  906105 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:24.376287  906105 pod_ready.go:86] duration metric: took 373.130356ms for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.576393  906105 pod_ready.go:83] waiting for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.976974  906105 pod_ready.go:94] pod "kube-proxy-nbcd6" is "Ready"
	I1026 15:19:24.977002  906105 pod_ready.go:86] duration metric: took 400.540602ms for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.178270  906105 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576418  906105 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:25.576444  906105 pod_ready.go:86] duration metric: took 398.150209ms for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576456  906105 pod_ready.go:40] duration metric: took 1.604033075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:25.629832  906105 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:25.636667  906105 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-494684" cluster and "default" namespace by default
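
[editor's note] The `minor skew: 1` figure above (kubectl 1.33.2 against a 1.34.1 cluster) is informational: kubectl is supported within one minor version of the apiserver, so at skew 1 minikube records the difference rather than warning about it.
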
	
	
	==> CRI-O <==
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.786836377Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.790327623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.790358885Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.79038204Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.794256864Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.79428962Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.794306079Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.810536412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.810573105Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.81059786Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.814234651Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:19:23 no-preload-954807 crio[648]: time="2025-10-26T15:19:23.814272912Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.825280743Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=146866d4-6e05-44f7-81aa-0d0a7f71b1a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.826200195Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c14f395-a4dd-4f86-a40b-8aa3f6b203a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.827309048Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=25d866a9-6224-4498-a8a8-6a3d1298d072 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.827445312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.834846181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.835386208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.862691878Z" level=info msg="Created container f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=25d866a9-6224-4498-a8a8-6a3d1298d072 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.863786881Z" level=info msg="Starting container: f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14" id=bf9cb6ab-e5c8-4e6d-afc4-77a6bca3b3ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:19:29 no-preload-954807 crio[648]: time="2025-10-26T15:19:29.865782103Z" level=info msg="Started container" PID=1715 containerID=f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper id=bf9cb6ab-e5c8-4e6d-afc4-77a6bca3b3ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be
	Oct 26 15:19:29 no-preload-954807 conmon[1713]: conmon f04c6bfff6203a1a10d4 <ninfo>: container 1715 exited with status 1
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.098232372Z" level=info msg="Removing container: d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.115729943Z" level=info msg="Error loading conmon cgroup of container d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245: cgroup deleted" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:19:30 no-preload-954807 crio[648]: time="2025-10-26T15:19:30.119319783Z" level=info msg="Removed container d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr/dashboard-metrics-scraper" id=c6401641-0ff1-4364-8cbb-e06b2e2bebc2 name=/runtime.v1.RuntimeService/RemoveContainer
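
[editor's note] The CNI monitoring lines at the top of this section show CRI-O reacting to WRITE/RENAME/CREATE events on /etc/cni/net.d as kindnet atomically rewrites its conflist (write a .temp file, then rename it into place), reloading the default network each time. A minimal sketch of such a directory watcher with fsnotify, the library the ocicni monitor builds on; this is an illustration, not CRI-O's actual code:

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // Watch the CNI config directory for config writes and renames.
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // Mirrors the "CNI monitoring event WRITE/RENAME/CREATE" lines.
                log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
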
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f04c6bfff6203       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   1ab8e350af3c6       dashboard-metrics-scraper-6ffb444bf9-s2lnr   kubernetes-dashboard
	2f1b442c63394       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           26 seconds ago       Running             storage-provisioner         2                   20b3c25034a57       storage-provisioner                          kube-system
	821bf60d52109       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   54ee650019adb       kubernetes-dashboard-855c9754f9-mns4v        kubernetes-dashboard
	00bf5ba9f6f7e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   f3c4e4d8a5fad       coredns-66bc5c9577-7xjmh                     kube-system
	15ddf611b7c06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   483b5c42d101e       busybox                                      default
	7f2f05ce22257       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           57 seconds ago       Exited              storage-provisioner         1                   20b3c25034a57       storage-provisioner                          kube-system
	3d0489895ef79       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   e51963a248cfe       kindnet-9grs2                                kube-system
	752e98dc5d452       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   693fdf4148b6d       kube-proxy-q8nns                             kube-system
	c4a70523738c5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fd900d9d1df8b       kube-controller-manager-no-preload-954807    kube-system
	cb2dbcb5faf83       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a854f42143668       kube-scheduler-no-preload-954807             kube-system
	62ad6fae814dc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   70d4de6a1280e       etcd-no-preload-954807                       kube-system
	1eb364639f4fd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6bc72f1ad37fc       kube-apiserver-no-preload-954807             kube-system
	
	
	==> coredns [00bf5ba9f6f7eb7ee174165b87d6143905a98c7e287e18bce58f41e656d7f5ef] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50782 - 54189 "HINFO IN 2416344462135623600.4311425446834346707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026133145s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
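
[editor's note] The i/o timeouts above are raw TCP failures against the `kubernetes` Service VIP, which only becomes reachable once kube-proxy has programmed the service rules; a bare dial reproduces the same failure mode:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the coredns errors above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // "i/o timeout" until service rules exist
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }
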
	
	
	==> describe nodes <==
	Name:               no-preload-954807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-954807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-954807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_17_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-954807
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:19:13 +0000   Sun, 26 Oct 2025 15:17:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-954807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c4720016-79cb-477b-b38d-c7121463d568
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-7xjmh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 etcd-no-preload-954807                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-9grs2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-no-preload-954807              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-no-preload-954807     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-q8nns                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-954807              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s2lnr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mns4v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
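
[editor's note] As a sanity check, the 850m CPU request total is the sum of the per-pod requests listed above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2-CPU (2000m) capacity is 42.5%, matching the 42% shown.
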
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           119s                   node-controller  Node no-preload-954807 event: Registered Node no-preload-954807 in Controller
	  Normal   NodeReady                104s                   kubelet          Node no-preload-954807 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node no-preload-954807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node no-preload-954807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node no-preload-954807 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node no-preload-954807 event: Registered Node no-preload-954807 in Controller
	
	
	==> dmesg <==
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959] <==
	{"level":"warn","ts":"2025-10-26T15:18:38.486945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.520094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.573258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.620640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.679794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.704511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.728904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.749070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.767539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.783697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.807548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.837664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.868243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.886187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.940642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.969012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:38.998920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.072064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.099677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.114034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.183711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.231379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.308304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.367928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:39.693944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:19:40 up  5:02,  0 user,  load average: 3.45, 3.55, 3.09
	Linux no-preload-954807 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d0489895ef7987f8267922d4be82aea65bc786b1bc5d8331329f91f3b06f873] <==
	I1026 15:18:43.560744       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:18:43.560977       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:18:43.565489       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:18:43.565519       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:18:43.565536       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:18:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:18:43.786616       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:18:43.786647       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:18:43.786659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:18:43.787443       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:19:13.780646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:19:13.787318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:19:13.787418       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:19:13.787316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:19:15.087497       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:19:15.087537       1 metrics.go:72] Registering metrics
	I1026 15:19:15.087602       1 controller.go:711] "Syncing nftables rules"
	I1026 15:19:23.782355       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:19:23.782480       1 main.go:301] handling current node
	I1026 15:19:33.781202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:19:33.781231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4] <==
	I1026 15:18:42.038031       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:18:42.135223       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:18:42.145892       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:18:42.186405       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:18:42.186458       1 policy_source.go:240] refreshing policies
	I1026 15:18:42.256928       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:18:42.266425       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:18:42.293223       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:18:42.293316       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:42.293359       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 15:18:42.293385       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:18:42.293392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:18:42.294640       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1026 15:18:42.360349       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:18:42.728489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:18:42.765106       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:18:43.483518       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:18:43.652077       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:18:43.789072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:18:43.857151       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:18:44.027943       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.141.213"}
	I1026 15:18:44.084957       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.143.214"}
	I1026 15:18:46.545989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:18:46.644429       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:18:46.744315       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6] <==
	I1026 15:18:46.250274       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:18:46.254537       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:18:46.257786       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:18:46.259088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:18:46.259174       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:18:46.283063       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:18:46.283064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:18:46.283118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:18:46.285383       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:18:46.287671       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:18:46.287686       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:18:46.288343       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:18:46.288410       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:18:46.289604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:18:46.290767       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:18:46.290847       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:18:46.290915       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-954807"
	I1026 15:18:46.290958       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:18:46.292963       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:18:46.294035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:18:46.317977       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:18:46.318006       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:18:46.318013       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:18:46.318153       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:18:46.321373       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [752e98dc5d452109116989f3da58948224ad6572aecbb195926fc5bbad6b9f8c] <==
	I1026 15:18:44.013860       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:18:44.214429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:18:44.315014       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:18:44.315145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:18:44.315260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:18:44.349331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:18:44.349443       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:18:44.353320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:18:44.353654       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:18:44.353829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:44.355242       1 config.go:200] "Starting service config controller"
	I1026 15:18:44.355295       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:18:44.355337       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:18:44.355363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:18:44.355396       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:18:44.355422       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:18:44.356123       1 config.go:309] "Starting node config controller"
	I1026 15:18:44.356173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:18:44.356201       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:18:44.456955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:18:44.457075       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:18:44.457137       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d] <==
	I1026 15:18:39.993967       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:18:44.568145       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:18:44.570784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:44.576956       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:18:44.577008       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:18:44.577045       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:18:44.577063       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:18:44.577086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.577100       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.577326       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:18:44.577428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:18:44.677910       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 15:18:44.677991       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:18:44.678052       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:18:46 no-preload-954807 kubelet[766]: I1026 15:18:46.892420     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/89db4534-81ce-41d2-b3fa-771b17a5d05b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mns4v\" (UID: \"89db4534-81ce-41d2-b3fa-771b17a5d05b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mns4v"
	Oct 26 15:18:47 no-preload-954807 kubelet[766]: W1026 15:18:47.210691     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad WatchSource:0}: Error finding container 54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad: Status 404 returned error can't find the container with id 54ee650019adb7702a25890b33146fdc18973d0406c356054b844e33faf1aaad
	Oct 26 15:18:47 no-preload-954807 kubelet[766]: W1026 15:18:47.228386     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/974a34e5ba04342c804de8db785e3a0787f580e052424df5a8159d9faef26786/crio-1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be WatchSource:0}: Error finding container 1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be: Status 404 returned error can't find the container with id 1ab8e350af3c6de64d648b68daebf8c44c3fbe5a41a1927f4e2d8aa1082743be
	Oct 26 15:18:50 no-preload-954807 kubelet[766]: I1026 15:18:50.711953     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:18:56 no-preload-954807 kubelet[766]: I1026 15:18:56.992018     766 scope.go:117] "RemoveContainer" containerID="5a1509739df1e6ab7e800389008a1fcbaa46d9c2bb85de5d2922dcc48df15006"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.013921     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mns4v" podStartSLOduration=5.980214341 podStartE2EDuration="11.01390391s" podCreationTimestamp="2025-10-26 15:18:46 +0000 UTC" firstStartedPulling="2025-10-26 15:18:47.214441629 +0000 UTC m=+13.724870651" lastFinishedPulling="2025-10-26 15:18:52.248131157 +0000 UTC m=+18.758560220" observedRunningTime="2025-10-26 15:18:53.00005421 +0000 UTC m=+19.510483232" watchObservedRunningTime="2025-10-26 15:18:57.01390391 +0000 UTC m=+23.524332932"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.996546     766 scope.go:117] "RemoveContainer" containerID="5a1509739df1e6ab7e800389008a1fcbaa46d9c2bb85de5d2922dcc48df15006"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: I1026 15:18:57.996934     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:18:57 no-preload-954807 kubelet[766]: E1026 15:18:57.997080     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:18:59 no-preload-954807 kubelet[766]: I1026 15:18:59.001124     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:18:59 no-preload-954807 kubelet[766]: E1026 15:18:59.001293     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:07 no-preload-954807 kubelet[766]: I1026 15:19:07.192441     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: I1026 15:19:08.033105     766 scope.go:117] "RemoveContainer" containerID="e89772d29a75c21f5f8370bdcfba167e6169af9d261928abcd3420fdc62339f8"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: I1026 15:19:08.033416     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:08 no-preload-954807 kubelet[766]: E1026 15:19:08.033578     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:14 no-preload-954807 kubelet[766]: I1026 15:19:14.050727     766 scope.go:117] "RemoveContainer" containerID="7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544"
	Oct 26 15:19:17 no-preload-954807 kubelet[766]: I1026 15:19:17.192207     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:17 no-preload-954807 kubelet[766]: E1026 15:19:17.192939     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:29 no-preload-954807 kubelet[766]: I1026 15:19:29.823884     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: I1026 15:19:30.094676     766 scope.go:117] "RemoveContainer" containerID="d2a8203c308a32860104b35f28a0f1aeb81ec521c942b20e6a2700433430e245"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: I1026 15:19:30.094943     766 scope.go:117] "RemoveContainer" containerID="f04c6bfff6203a1a10d454b2fbcf80e1ae450d2a29e526a98e281c409a3afb14"
	Oct 26 15:19:30 no-preload-954807 kubelet[766]: E1026 15:19:30.104387     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2lnr_kubernetes-dashboard(6280217f-1658-43de-8596-66ca6e7bc11d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2lnr" podUID="6280217f-1658-43de-8596-66ca6e7bc11d"
	Oct 26 15:19:34 no-preload-954807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:19:35 no-preload-954807 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:19:35 no-preload-954807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [821bf60d5210953702380bf2d035ceeea898a0c09c6c1ea9cb80ae3fc42d8fd0] <==
	2025/10/26 15:18:52 Starting overwatch
	2025/10/26 15:18:52 Using namespace: kubernetes-dashboard
	2025/10/26 15:18:52 Using in-cluster config to connect to apiserver
	2025/10/26 15:18:52 Using secret token for csrf signing
	2025/10/26 15:18:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:18:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:18:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:18:52 Generating JWE encryption key
	2025/10/26 15:18:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:18:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:18:52 Initializing JWE encryption key from synchronized object
	2025/10/26 15:18:52 Creating in-cluster Sidecar client
	2025/10/26 15:18:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:18:52 Serving insecurely on HTTP port: 9090
	2025/10/26 15:19:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f1b442c63394a6e1e2d9967a43cfad768604badfe58c12bd0b44110c9f676b6] <==
	I1026 15:19:14.125431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:19:14.184810       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:19:14.185001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:19:14.188140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:17.643656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:21.904547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:25.502667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:28.556885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.580139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.585330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:31.585568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:19:31.585803       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a!
	I1026 15:19:31.586070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81e2b564-6d77-48d7-9a32-6c72ab01dcb0", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a became leader
	W1026 15:19:31.588406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:31.594705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:31.688957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-954807_07d39ba0-e5f7-421a-a809-c2383c72c62a!
	W1026 15:19:33.597562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:33.602678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:35.613270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:35.633470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:37.641337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:37.646364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:39.649257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:39.664347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7f2f05ce2225712b79d6dc8145ff0ce7d1e85670f693e7957759cca5f7d9b544] <==
	I1026 15:18:43.839840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:19:13.860545       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-954807 -n no-preload-954807
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-954807 -n no-preload-954807: exit status 2 (347.639136ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-954807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.21s)
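
Note on this failure: the post-mortem above is internally consistent. kindnet and the first storage-provisioner instance both lost the kubernetes Service VIP with "dial tcp 10.96.0.1:443: i/o timeout" for roughly 30 seconds after the restart, dashboard-metrics-scraper is in CrashLoopBackOff, and systemd stopped kubelet at 15:19:34 while the pause was in flight. A quick way to re-check whether the apiserver is reachable through the Service VIP from inside the node (a sketch; 10.96.0.1:443 is the ClusterIP seen in the logs above, and /version may still be gated by RBAC) is:

	out/minikube-linux-arm64 -p no-preload-954807 ssh -- curl -sk https://10.96.0.1:443/version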

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.338637ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:19:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
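
The error chain in the stderr above ("check paused: list paused: runc: sudo runc list -f json") shows that enabling an addon first verifies that no containers are paused, and that check shells out to runc on the node. A minimal way to reproduce the failing step by hand (a sketch, assuming the profile container is still running) is:

	out/minikube-linux-arm64 -p default-k8s-diff-port-494684 ssh -- sudo runc list -f json

The "open /run/runc: no such file or directory" message indicates that runc's default state directory does not exist on this crio node, so the listing fails before any paused-state check can happen.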
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-494684 describe deploy/metrics-server -n kube-system: exit status 1 (84.843444ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-494684 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
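
When the metrics-server deployment does get created, the assertion here only needs the container image strings, so a narrower query than describe (a sketch using the same context as above) would be:

	kubectl --context default-k8s-diff-port-494684 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the deployment was never created because the enable command itself failed, which is why the describe above returns NotFound and the expected "fake.domain/registry.k8s.io/echoserver:1.4" image is absent.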
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-494684
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-494684:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	        "Created": "2025-10-26T15:18:07.847117574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 906518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:18:07.926578496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hosts",
	        "LogPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5-json.log",
	        "Name": "/default-k8s-diff-port-494684",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-494684:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-494684",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	                "LowerDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-494684",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-494684/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-494684",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a933ca8de7daee23f41c769d723a13c2db283960ff366a493eae9722c0a85db",
	            "SandboxKey": "/var/run/docker/netns/7a933ca8de7d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-494684": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:59:ea:28:fc:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3a8cf1602f3f72d6a70a2be8fdd96fd095eb34b48ad075b2aa41a3d8b9118a52",
	                    "EndpointID": "83f6c6940b8536a81938d40a0632c22010bff9fd5f16b3576c7cfa3a66421ded",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-494684",
	                        "ff68c01604a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
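
Worth noting in the inspect output: all five exposed container ports (22, 2376, 5000, 8444, 32443) are published on 127.0.0.1 with dynamically assigned host ports (33842-33846 in this run). To extract just that mapping instead of the full inspect document, a format query such as the following works:

	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-494684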
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25: (1.575275507s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-304880 image list --format=json                                                                                                                          │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p old-k8s-version-304880 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ start   │ -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p old-k8s-version-304880                                                                                                                                                │ old-k8s-version-304880       │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-018497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │                     │
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                          │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                              │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                             │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                    │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                               │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                              │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:18:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:18:24.873586  908785 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:18:24.873824  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.873852  908785 out.go:374] Setting ErrFile to fd 2...
	I1026 15:18:24.873873  908785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:18:24.874151  908785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:18:24.874543  908785 out.go:368] Setting JSON to false
	I1026 15:18:24.875517  908785 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18057,"bootTime":1761473848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:18:24.875610  908785 start.go:141] virtualization:  
	I1026 15:18:24.878798  908785 out.go:179] * [no-preload-954807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:18:24.882718  908785 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:18:24.882793  908785 notify.go:220] Checking for updates...
	I1026 15:18:24.886906  908785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:18:24.889801  908785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:24.892783  908785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:18:24.895757  908785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:18:24.898642  908785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:18:24.901948  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:24.902567  908785 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:18:24.948990  908785 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:18:24.949107  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.042150  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.031901314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.042260  908785 docker.go:318] overlay module found
	I1026 15:18:25.045300  908785 out.go:179] * Using the docker driver based on existing profile
	I1026 15:18:25.048156  908785 start.go:305] selected driver: docker
	I1026 15:18:25.048169  908785 start.go:925] validating driver "docker" against &{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.048276  908785 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:18:25.049069  908785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:18:25.141402  908785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:18:25.129156893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:18:25.141737  908785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:25.141766  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:25.141824  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:25.141856  908785 start.go:349] cluster config:
	{Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:25.145012  908785 out.go:179] * Starting "no-preload-954807" primary control-plane node in "no-preload-954807" cluster
	I1026 15:18:25.147872  908785 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:18:25.150783  908785 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:18:25.153691  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:25.153844  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.154159  908785 cache.go:107] acquiring lock: {Name:mkbe2086c35e9fcbe8c03bdef4b41f05ca228154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154244  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:18:25.154253  908785 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.981µs
	I1026 15:18:25.154266  908785 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:18:25.154278  908785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:18:25.154523  908785 cache.go:107] acquiring lock: {Name:mk2325fad129f4b7d5aa09cccfdaa3da809a73fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154591  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:18:25.154599  908785 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 83.743µs
	I1026 15:18:25.154607  908785 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:18:25.154618  908785 cache.go:107] acquiring lock: {Name:mk54c57481d4cb891842b1b352451c8a69a47281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154662  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:18:25.154672  908785 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 56.033µs
	I1026 15:18:25.154686  908785 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:18:25.154696  908785 cache.go:107] acquiring lock: {Name:mk5a8cbd33cc84011ebd29296028bb78893eefc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154727  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:18:25.154731  908785 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.53µs
	I1026 15:18:25.154737  908785 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:18:25.154746  908785 cache.go:107] acquiring lock: {Name:mkaf3dfd27f1d15aad668c191c7cc85c71d2c9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154771  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:18:25.154776  908785 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.376µs
	I1026 15:18:25.154782  908785 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:18:25.154792  908785 cache.go:107] acquiring lock: {Name:mk964a36cda2ac1ad4a9006d14be02c6bd71c41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.154916  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:18:25.154923  908785 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 131.685µs
	I1026 15:18:25.154929  908785 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:18:25.154963  908785 cache.go:107] acquiring lock: {Name:mkef4d9c96ab97f5a848fa8d925b343812fa37ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155004  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:18:25.155014  908785 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 71.73µs
	I1026 15:18:25.155020  908785 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:18:25.155031  908785 cache.go:107] acquiring lock: {Name:mkc8d2557eb259bb5390e2f2db4396a6aec79411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.155060  908785 cache.go:115] /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:18:25.155065  908785 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 35.389µs
	I1026 15:18:25.155076  908785 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:18:25.155087  908785 cache.go:87] Successfully saved all images to host disk.
	I1026 15:18:25.186482  908785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:18:25.186502  908785 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:18:25.186515  908785 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:18:25.186538  908785 start.go:360] acquireMachinesLock for no-preload-954807: {Name:mk3de11c10d64abd2c458c411445bde4bf32881c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:18:25.186600  908785 start.go:364] duration metric: took 46.409µs to acquireMachinesLock for "no-preload-954807"
	I1026 15:18:25.186620  908785 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:18:25.186626  908785 fix.go:54] fixHost starting: 
	I1026 15:18:25.186892  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.218587  908785 fix.go:112] recreateIfNeeded on no-preload-954807: state=Stopped err=<nil>
	W1026 15:18:25.218633  908785 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:18:23.824889  906105 out.go:252]   - Booting up control plane ...
	I1026 15:18:23.825002  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:23.825084  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:23.826750  906105 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:23.843130  906105 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:23.843590  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:23.851900  906105 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:23.852216  906105 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:23.852513  906105 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:24.001209  906105 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:24.001367  906105 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:25.996925  906105 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000832133s
	I1026 15:18:26.000302  906105 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:26.000400  906105 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1026 15:18:26.000511  906105 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:26.000594  906105 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:25.221939  908785 out.go:252] * Restarting existing docker container for "no-preload-954807" ...
	I1026 15:18:25.222028  908785 cli_runner.go:164] Run: docker start no-preload-954807
	I1026 15:18:25.539012  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:25.573935  908785 kic.go:430] container "no-preload-954807" state is running.
	I1026 15:18:25.574383  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:25.603715  908785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/config.json ...
	I1026 15:18:25.604226  908785 machine.go:93] provisionDockerMachine start ...
	I1026 15:18:25.604316  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:25.634297  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:25.634626  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:25.634636  908785 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:18:25.636397  908785 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:18:28.841282  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:28.841360  908785 ubuntu.go:182] provisioning hostname "no-preload-954807"
	I1026 15:18:28.841444  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:28.866436  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:28.866762  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:28.866774  908785 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-954807 && echo "no-preload-954807" | sudo tee /etc/hostname
	I1026 15:18:29.069155  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-954807
	
	I1026 15:18:29.069302  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.098780  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:29.099104  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:29.099122  908785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-954807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-954807/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-954807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:18:29.276929  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
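The script above follows the Debian/Ubuntu convention of mapping the machine's hostname to 127.0.1.1, rewriting an existing 127.0.1.1 entry in place and appending one otherwise. A quick way to confirm the result on the node (a hypothetical spot-check, not part of the run):

    # Resolve the hostname through /etc/hosts; expect the 127.0.1.1 entry written above
    getent hosts no-preload-954807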
	I1026 15:18:29.276952  908785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:18:29.276983  908785 ubuntu.go:190] setting up certificates
	I1026 15:18:29.276993  908785 provision.go:84] configureAuth start
	I1026 15:18:29.277060  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:29.299794  908785 provision.go:143] copyHostCerts
	I1026 15:18:29.299860  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:18:29.299879  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:18:29.299957  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:18:29.300067  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:18:29.300072  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:18:29.300099  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:18:29.300159  908785 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:18:29.300168  908785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:18:29.300193  908785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:18:29.300245  908785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.no-preload-954807 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-954807]
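The san=[...] list above covers loopback, the container's network IP, and the machine's names, so a client can validate the provisioned TLS cert regardless of which address it dials. To read the SANs back out of the generated cert on the node (a hypothetical check; -ext needs OpenSSL 1.1.1+):

    # Print the subjectAltName extension of the provisioned server cert
    openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem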
	I1026 15:18:30.781617  906105 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.780617084s
	I1026 15:18:29.899785  908785 provision.go:177] copyRemoteCerts
	I1026 15:18:29.899900  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:18:29.899970  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:29.942702  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.078143  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:18:30.113207  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:18:30.146061  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:18:30.178703  908785 provision.go:87] duration metric: took 901.687509ms to configureAuth
	I1026 15:18:30.178771  908785 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:18:30.178995  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:30.179148  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.207087  908785 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:30.207408  908785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33847 <nil> <nil>}
	I1026 15:18:30.207425  908785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:18:30.676969  908785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:18:30.677026  908785 machine.go:96] duration metric: took 5.072780445s to provisionDockerMachine
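The printf/tee command above writes an environment file at /etc/sysconfig/crio.minikube (the echoed CRIO_MINIKUBE_OPTIONS line is just tee copying its input to stdout) and restarts CRI-O so the --insecure-registry flag for the service CIDR takes effect. Verifying the drop-in landed (hypothetical spot-check):

    # Confirm the sysconfig fragment exists and crio came back up
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio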
	I1026 15:18:30.677052  908785 start.go:293] postStartSetup for "no-preload-954807" (driver="docker")
	I1026 15:18:30.677077  908785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:18:30.677149  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:18:30.677252  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.710413  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:30.823871  908785 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:18:30.827555  908785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:18:30.827587  908785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:18:30.827599  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:18:30.827656  908785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:18:30.827744  908785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:18:30.827864  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:18:30.838700  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:30.871356  908785 start.go:296] duration metric: took 194.275536ms for postStartSetup
	I1026 15:18:30.871461  908785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:18:30.871518  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:30.902387  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.034591  908785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:18:31.045225  908785 fix.go:56] duration metric: took 5.858591617s for fixHost
	I1026 15:18:31.045253  908785 start.go:83] releasing machines lock for "no-preload-954807", held for 5.85864381s
	I1026 15:18:31.045332  908785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-954807
	I1026 15:18:31.106399  908785 ssh_runner.go:195] Run: cat /version.json
	I1026 15:18:31.106456  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.106711  908785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:18:31.106777  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:31.151426  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.158586  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:31.396049  908785 ssh_runner.go:195] Run: systemctl --version
	I1026 15:18:31.403261  908785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:18:31.469937  908785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:18:31.482908  908785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:18:31.483041  908785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:18:31.493995  908785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
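Because the "docker" driver plus "crio" runtime selects kindnet (cni.go:143 above), minikube shelves any competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; here the find matched nothing. The same expression with -print in place of the mv shows what would be disabled (a hypothetical dry run):

    # Dry run of the CNI-disable sweep: list matching configs without renaming them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -print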
	I1026 15:18:31.494066  908785 start.go:495] detecting cgroup driver to use...
	I1026 15:18:31.494113  908785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:18:31.494187  908785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:18:31.521177  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:18:31.541265  908785 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:18:31.541370  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:18:31.569119  908785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:18:31.584298  908785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:18:31.790771  908785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:18:32.003146  908785 docker.go:234] disabling docker service ...
	I1026 15:18:32.003270  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:18:32.027531  908785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:18:32.052390  908785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:18:32.244277  908785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:18:32.429463  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:18:32.445776  908785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:18:32.465349  908785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:18:32.465428  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.478857  908785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:18:32.478978  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.488961  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.499025  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.509768  908785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:18:32.519485  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.529990  908785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:32.539869  908785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
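Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl. A grep spot-check and the rough expected result (hypothetical; exact line order depends on the stock drop-in):

    # Spot-check the rewritten CRI-O drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (roughly):
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",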
	I1026 15:18:32.550905  908785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:18:32.559187  908785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:18:32.568293  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:32.731012  908785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:18:32.890143  908785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:18:32.890243  908785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:18:32.895296  908785 start.go:563] Will wait 60s for crictl version
	I1026 15:18:32.895370  908785 ssh_runner.go:195] Run: which crictl
	I1026 15:18:32.899632  908785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:18:32.959445  908785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:18:32.959551  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:32.999198  908785 ssh_runner.go:195] Run: crio --version
	I1026 15:18:33.053114  908785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:18:32.381923  906105 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.381595886s
	I1026 15:18:34.004615  906105 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.004015537s
	I1026 15:18:34.039440  906105 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:34.060957  906105 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:34.093820  906105 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:34.094029  906105 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-494684 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:34.116373  906105 kubeadm.go:318] [bootstrap-token] Using token: opo3lq.zbfbsr53k4i0zecq
	I1026 15:18:33.056258  908785 cli_runner.go:164] Run: docker network inspect no-preload-954807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
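The --format argument above is a Go template that makes docker emit a single JSON object (name, driver, subnet, gateway, MTU, container IPs), letting minikube parse one inspect call instead of several. The same technique against a stock network (hypothetical example):

    # Emit just the IPAM config of the default bridge network as JSON
    docker network inspect bridge --format '{{json .IPAM.Config}}'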
	I1026 15:18:33.077802  908785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:18:33.083627  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
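The one-liner above is a replace-or-append idiom: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is staged in /tmp before a single sudo cp over /etc/hosts (a plain > redirect would run as the unprivileged shell, not under sudo). Generalized (a hypothetical helper, same pattern):

    # Replace-or-append a tab-separated /etc/hosts entry
    update_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    update_hosts_entry 192.168.85.1 host.minikube.internal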
	I1026 15:18:33.094756  908785 kubeadm.go:883] updating cluster {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:18:33.094867  908785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:33.094911  908785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:33.140777  908785 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:18:33.140799  908785 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:18:33.140815  908785 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:18:33.140916  908785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-954807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
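The empty ExecStart= line in the unit above is the standard systemd override idiom: a non-oneshot service may declare only one ExecStart, so a drop-in must first clear the inherited value before supplying its own. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; to see the merged unit (hypothetical check):

    # Show kubelet.service with all drop-ins applied, in order
    systemctl cat kubelet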
	I1026 15:18:33.140993  908785 ssh_runner.go:195] Run: crio config
	I1026 15:18:33.234362  908785 cni.go:84] Creating CNI manager for ""
	I1026 15:18:33.234382  908785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:33.234396  908785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:18:33.234442  908785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-954807 NodeName:no-preload-954807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:18:33.234611  908785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-954807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
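This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new just below (2214 bytes). A config like this can be sanity-checked without mutating the node via kubeadm's dry-run mode (a hypothetical check, not something the test performs):

    # Parse and validate the rendered kubeadm config without applying it
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run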
	
	I1026 15:18:33.234704  908785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:18:33.244949  908785 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:18:33.245042  908785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:18:33.252734  908785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:18:33.266334  908785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:18:33.280280  908785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:18:33.300014  908785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:18:33.305316  908785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:33.315583  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:33.467826  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:33.491186  908785 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807 for IP: 192.168.85.2
	I1026 15:18:33.491220  908785 certs.go:195] generating shared ca certs ...
	I1026 15:18:33.491258  908785 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:33.491442  908785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:18:33.491517  908785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:18:33.491547  908785 certs.go:257] generating profile certs ...
	I1026 15:18:33.491665  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.key
	I1026 15:18:33.491771  908785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key.274c6805
	I1026 15:18:33.491845  908785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key
	I1026 15:18:33.492003  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:18:33.492056  908785 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:18:33.492084  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:18:33.492115  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:18:33.492158  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:18:33.492198  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:18:33.492264  908785 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:18:33.493002  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:18:33.513517  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:18:33.532884  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:18:33.555231  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:18:33.579754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:18:33.602447  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:18:33.628293  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:18:33.684754  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:18:33.753264  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:18:33.821238  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:18:33.843108  908785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:18:33.862371  908785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:18:33.878516  908785 ssh_runner.go:195] Run: openssl version
	I1026 15:18:33.885509  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:18:33.895167  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.900931  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.901140  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:18:33.967665  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:18:33.976773  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:18:33.985438  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990423  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:18:33.990496  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:18:34.052535  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:18:34.062937  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:18:34.072240  908785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076658  908785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.076793  908785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:34.127445  908785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
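The numeric symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: tools that trust a CA directory look certificates up by <subject-hash>.N, which is why each PEM gets a hash-named alias. Reproducing the last link by hand (a sketch using the same paths):

    # b5213941 is the subject hash of minikubeCA.pem; the .0 suffix disambiguates hash collisions
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"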
	I1026 15:18:34.136993  908785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:18:34.141905  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:18:34.197715  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:18:34.255022  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:18:34.321728  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:18:34.389895  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:18:34.548526  908785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
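Each -checkend 86400 call above exits non-zero if the certificate expires within 86,400 seconds (24 hours), which is presumably how minikube decides the existing control-plane certs are still safe to reuse. The same sweep as a loop over three of the certs checked above (a minimal sketch; the etcd certs live under certs/etcd/):

    # Flag any control-plane cert that will expire within 24h
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "expiring soon: ${crt}.crt"
    done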
	I1026 15:18:34.681856  908785 kubeadm.go:400] StartCluster: {Name:no-preload-954807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-954807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:34.681971  908785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:18:34.682063  908785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:18:34.783414  908785 cri.go:89] found id: "c4a70523738c5928dbc426321e709bc3f584fea33551f4eb59b502e1025996b6"
	I1026 15:18:34.783566  908785 cri.go:89] found id: "cb2dbcb5faf83c357e52fb2cc1dc056903ef6c7a624e8937bd9f66d2d236947d"
	I1026 15:18:34.783587  908785 cri.go:89] found id: "62ad6fae814dc7d1b1e043a7bf0089b643c2e90cbd6cd490f9e479c2da0be959"
	I1026 15:18:34.783621  908785 cri.go:89] found id: "1eb364639f4fd686958c9dceac397e78d78cc5b630b9e6290b2e255e866e1ac4"
	I1026 15:18:34.783639  908785 cri.go:89] found id: ""
	I1026 15:18:34.783719  908785 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:18:34.811816  908785 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:18:34Z" level=error msg="open /run/runc: no such file or directory"
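The runc error above appears benign: rootful runc keeps container state under /run/runc by default, so "no such file or directory" just means no runc-managed containers exist yet on the freshly restarted node. minikube downgrades this to the unpause-failed warning and proceeds with the crictl listing it already obtained. Confirming the empty state directory (hypothetical):

    # runc's default state root for UID 0; absent until a container is created
    sudo ls /run/runc || true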
	I1026 15:18:34.812057  908785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:18:34.827966  908785 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:18:34.828092  908785 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:18:34.828177  908785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:18:34.843255  908785 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:18:34.843698  908785 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-954807" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.843791  908785 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-954807" cluster setting kubeconfig missing "no-preload-954807" context setting]
	I1026 15:18:34.844059  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
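	
	The kubeconfig lines above show the repair path: the cluster and context entries for "no-preload-954807" are missing, so they are re-added under a write lock before the restart continues. A rough client-go sketch of the same idea, assuming the k8s.io/client-go module; ensureContext and the kubeconfig path are hypothetical:
	
	package main
	
	import (
		"log"
	
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)
	
	// ensureContext re-adds a missing cluster/context pair, mirroring the
	// "kubeconfig needs updating (will repair)" step in the log.
	func ensureContext(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &api.Cluster{Server: server}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		}
		return clientcmd.WriteToFile(*cfg, path)
	}
	
	func main() {
		// Placeholder path; the report uses a Jenkins workspace kubeconfig.
		if err := ensureContext("/home/jenkins/.kube/config", "no-preload-954807", "https://192.168.85.2:8443"); err != nil {
			log.Fatal(err)
		}
	}
	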
	I1026 15:18:34.845642  908785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:18:34.871634  908785 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 15:18:34.871666  908785 kubeadm.go:601] duration metric: took 43.554458ms to restartPrimaryControlPlane
	I1026 15:18:34.871675  908785 kubeadm.go:402] duration metric: took 189.829653ms to StartCluster
	I1026 15:18:34.871690  908785 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.871749  908785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:34.872330  908785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:34.872519  908785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:34.873018  908785 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:34.873111  908785 addons.go:69] Setting storage-provisioner=true in profile "no-preload-954807"
	I1026 15:18:34.873126  908785 addons.go:238] Setting addon storage-provisioner=true in "no-preload-954807"
	W1026 15:18:34.873137  908785 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:18:34.873163  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873189  908785 config.go:182] Loaded profile config "no-preload-954807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:34.873264  908785 addons.go:69] Setting dashboard=true in profile "no-preload-954807"
	I1026 15:18:34.873297  908785 addons.go:238] Setting addon dashboard=true in "no-preload-954807"
	W1026 15:18:34.873336  908785 addons.go:247] addon dashboard should already be in state true
	I1026 15:18:34.873368  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.873660  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877946  908785 addons.go:69] Setting default-storageclass=true in profile "no-preload-954807"
	I1026 15:18:34.878023  908785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-954807"
	I1026 15:18:34.877565  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.878787  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.877575  908785 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:34.888833  908785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:34.921307  908785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.925761  908785 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:34.925783  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:34.925866  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.941729  908785 addons.go:238] Setting addon default-storageclass=true in "no-preload-954807"
	W1026 15:18:34.941762  908785 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:18:34.941790  908785 host.go:66] Checking if "no-preload-954807" exists ...
	I1026 15:18:34.942216  908785 cli_runner.go:164] Run: docker container inspect no-preload-954807 --format={{.State.Status}}
	I1026 15:18:34.950093  908785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:18:34.956801  908785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:18:34.119508  906105 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:34.119644  906105 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:34.125645  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:34.136618  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:34.144003  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:34.155143  906105 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:34.162423  906105 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:34.413457  906105 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:35.074961  906105 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:35.413379  906105 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:35.414997  906105 kubeadm.go:318] 
	I1026 15:18:35.415072  906105 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:35.415078  906105 kubeadm.go:318] 
	I1026 15:18:35.415155  906105 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:35.415160  906105 kubeadm.go:318] 
	I1026 15:18:35.415185  906105 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:35.419772  906105 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:35.419856  906105 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:35.419905  906105 kubeadm.go:318] 
	I1026 15:18:35.420002  906105 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:35.420007  906105 kubeadm.go:318] 
	I1026 15:18:35.420066  906105 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:35.420070  906105 kubeadm.go:318] 
	I1026 15:18:35.420148  906105 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:35.420235  906105 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:35.420314  906105 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:35.420324  906105 kubeadm.go:318] 
	I1026 15:18:35.420408  906105 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:35.420488  906105 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:35.420497  906105 kubeadm.go:318] 
	I1026 15:18:35.420612  906105 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.420744  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:18:35.420794  906105 kubeadm.go:318] 	--control-plane 
	I1026 15:18:35.420800  906105 kubeadm.go:318] 
	I1026 15:18:35.420895  906105 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:35.420900  906105 kubeadm.go:318] 
	I1026 15:18:35.420998  906105 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token opo3lq.zbfbsr53k4i0zecq \
	I1026 15:18:35.421110  906105 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:18:35.440042  906105 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:18:35.440280  906105 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:18:35.440391  906105 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:35.440407  906105 cni.go:84] Creating CNI manager for ""
	I1026 15:18:35.440414  906105 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:18:35.444207  906105 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:18:35.447185  906105 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:18:35.456310  906105 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:18:35.456334  906105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:18:35.507388  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:18:36.090917  906105 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:36.091006  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:36.091050  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-494684 minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=default-k8s-diff-port-494684 minikube.k8s.io/primary=true
	I1026 15:18:36.514936  906105 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:36.515052  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.015410  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:37.515116  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.015362  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:38.515615  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.015108  906105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:39.233922  906105 kubeadm.go:1113] duration metric: took 3.142974166s to wait for elevateKubeSystemPrivileges
	I1026 15:18:39.233954  906105 kubeadm.go:402] duration metric: took 23.046817686s to StartCluster
	I1026 15:18:39.233975  906105 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.234032  906105 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:18:39.235069  906105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:39.235311  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:18:39.235322  906105 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:39.235586  906105 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:39.235621  906105 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:39.235684  906105 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.235698  906105 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.235723  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.236178  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.236758  906105 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-494684"
	I1026 15:18:39.236781  906105 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-494684"
	I1026 15:18:39.237117  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.240502  906105 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:39.252908  906105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:39.270053  906105 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-494684"
	I1026 15:18:39.270095  906105 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:18:39.270522  906105 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:18:39.282068  906105 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:18:34.959584  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:18:34.959611  908785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:18:34.959687  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:34.980722  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:34.990491  908785 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:34.990523  908785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:34.990600  908785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-954807
	I1026 15:18:35.026564  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.044932  908785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/no-preload-954807/id_rsa Username:docker}
	I1026 15:18:35.366750  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:18:35.366822  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:18:35.430297  908785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:35.447981  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:35.526736  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:18:35.526816  908785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:18:35.541300  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:35.557089  908785 node_ready.go:35] waiting up to 6m0s for node "no-preload-954807" to be "Ready" ...
	I1026 15:18:35.640785  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:18:35.640819  908785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:18:35.771188  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:18:35.771215  908785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:18:35.825305  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:18:35.825332  908785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:18:35.945173  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:18:35.945241  908785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:18:36.043908  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:18:36.043985  908785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:18:36.074085  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:18:36.074164  908785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:18:36.114626  908785 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:18:36.114697  908785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:18:36.162322  908785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:18:39.285064  906105 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.285091  906105 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:39.285174  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.313693  906105 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:39.313726  906105 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:39.313788  906105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:18:39.329825  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.352237  906105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:18:39.833145  906105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:18:39.835130  906105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:39.865906  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:39.891557  906105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:41.038716  906105 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.205461428s)
	I1026 15:18:41.038845  906105 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 15:18:41.038811  906105 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203616764s)
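	
	The long sed pipeline completed above rewrites the coredns ConfigMap in place; reading the injected text back out of the command, the resulting Corefile gains a hosts stanza resolving host.minikube.internal to the gateway before the usual forward plugin runs:
	
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	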
	I1026 15:18:41.039823  906105 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:18:41.560927  906105 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-494684" context rescaled to 1 replicas
	I1026 15:18:41.767543  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.901592798s)
	I1026 15:18:41.767597  906105 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.875965389s)
	I1026 15:18:41.789656  906105 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:18:41.792471  906105 addons.go:514] duration metric: took 2.556838269s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:18:42.009830  908785 node_ready.go:49] node "no-preload-954807" is "Ready"
	I1026 15:18:42.009866  908785 node_ready.go:38] duration metric: took 6.452696965s for node "no-preload-954807" to be "Ready" ...
	I1026 15:18:42.009885  908785 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:42.009955  908785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:44.074337  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.626316807s)
	I1026 15:18:44.074430  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.533054521s)
	I1026 15:18:44.093634  908785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.931220048s)
	I1026 15:18:44.093821  908785 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.083847423s)
	I1026 15:18:44.093842  908785 api_server.go:72] duration metric: took 9.221303285s to wait for apiserver process to appear ...
	I1026 15:18:44.093849  908785 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:44.093871  908785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:18:44.096535  908785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-954807 addons enable metrics-server
	
	I1026 15:18:44.100991  908785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:18:44.103937  908785 addons.go:514] duration metric: took 9.230903875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:18:44.105206  908785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:18:44.106296  908785 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:44.106318  908785 api_server.go:131] duration metric: took 12.458566ms to wait for apiserver health ...
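	
	The healthz wait above is a plain HTTPS poll against the apiserver. A minimal sketch, assuming the apiserver's self-signed certificate is acceptable for a local test (hence InsecureSkipVerify; the address is the one from the log):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Self-signed apiserver cert, so skip verification (test-only shortcut).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body) // expect "ok"
					return
				}
			}
			time.Sleep(time.Second)
		}
	}
	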
	I1026 15:18:44.106327  908785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:44.109695  908785 system_pods.go:59] 8 kube-system pods found
	I1026 15:18:44.109733  908785 system_pods.go:61] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.109742  908785 system_pods.go:61] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.109750  908785 system_pods.go:61] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.109757  908785 system_pods.go:61] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.109764  908785 system_pods.go:61] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.109768  908785 system_pods.go:61] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.109775  908785 system_pods.go:61] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.109780  908785 system_pods.go:61] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.109786  908785 system_pods.go:74] duration metric: took 3.453281ms to wait for pod list to return data ...
	I1026 15:18:44.109794  908785 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:44.112368  908785 default_sa.go:45] found service account: "default"
	I1026 15:18:44.112388  908785 default_sa.go:55] duration metric: took 2.586901ms for default service account to be created ...
	I1026 15:18:44.112396  908785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:44.115134  908785 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:44.115216  908785 system_pods.go:89] "coredns-66bc5c9577-7xjmh" [7c8cb8b7-9202-4e22-bc6b-db89e79c7589] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:44.115250  908785 system_pods.go:89] "etcd-no-preload-954807" [52c031cf-4dde-4c04-8883-80b3a9be7df3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:44.115283  908785 system_pods.go:89] "kindnet-9grs2" [24f115af-1173-42c3-a38d-af5044b515d6] Running
	I1026 15:18:44.115306  908785 system_pods.go:89] "kube-apiserver-no-preload-954807" [19b0fdfa-be5b-4363-91e4-5e49e816a746] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:44.115328  908785 system_pods.go:89] "kube-controller-manager-no-preload-954807" [cd19e3f8-151b-4b3e-b857-571a59f57f44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:44.115352  908785 system_pods.go:89] "kube-proxy-q8nns" [f407a5bf-332b-4393-8250-e22d40da01f9] Running
	I1026 15:18:44.115383  908785 system_pods.go:89] "kube-scheduler-no-preload-954807" [ddb87e7c-a779-4c46-b2af-bfe48e908828] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:44.115402  908785 system_pods.go:89] "storage-provisioner" [5cb08c14-ee23-4e69-b4b7-e5ef184ed78e] Running
	I1026 15:18:44.115424  908785 system_pods.go:126] duration metric: took 3.020964ms to wait for k8s-apps to be running ...
	I1026 15:18:44.115449  908785 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:44.115528  908785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:44.132618  908785 system_svc.go:56] duration metric: took 17.163659ms WaitForService to wait for kubelet
	I1026 15:18:44.132642  908785 kubeadm.go:586] duration metric: took 9.260101546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:44.132663  908785 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:44.135549  908785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:18:44.135577  908785 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:44.135589  908785 node_conditions.go:105] duration metric: took 2.919573ms to run NodePressure ...
	I1026 15:18:44.135602  908785 start.go:241] waiting for startup goroutines ...
	I1026 15:18:44.135610  908785 start.go:246] waiting for cluster config update ...
	I1026 15:18:44.135620  908785 start.go:255] writing updated cluster config ...
	I1026 15:18:44.135912  908785 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:44.139910  908785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:44.143746  908785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:43.043031  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:45.079469  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:46.199968  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:48.651556  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:47.542993  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:50.043710  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:52.043878  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:51.150539  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:53.150747  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:54.543184  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:57.043649  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:18:55.650612  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:57.655226  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:18:59.545897  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:02.043891  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:00.154271  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:02.649805  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.650562  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:04.543488  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.043487  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:07.149715  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.650530  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:09.542582  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:11.543176  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:12.149228  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.157707  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:14.043223  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.043564  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:16.651190  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:19.150299  908785 pod_ready.go:104] pod "coredns-66bc5c9577-7xjmh" is not "Ready", error: <nil>
	W1026 15:19:18.543877  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	W1026 15:19:21.042737  906105 node_ready.go:57] node "default-k8s-diff-port-494684" has "Ready":"False" status (will retry)
	I1026 15:19:21.149393  908785 pod_ready.go:94] pod "coredns-66bc5c9577-7xjmh" is "Ready"
	I1026 15:19:21.149423  908785 pod_ready.go:86] duration metric: took 37.005599421s for pod "coredns-66bc5c9577-7xjmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.152545  908785 pod_ready.go:83] waiting for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.157626  908785 pod_ready.go:94] pod "etcd-no-preload-954807" is "Ready"
	I1026 15:19:21.157652  908785 pod_ready.go:86] duration metric: took 5.07725ms for pod "etcd-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.160404  908785 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.165211  908785 pod_ready.go:94] pod "kube-apiserver-no-preload-954807" is "Ready"
	I1026 15:19:21.165241  908785 pod_ready.go:86] duration metric: took 4.811401ms for pod "kube-apiserver-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.171007  908785 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.347722  908785 pod_ready.go:94] pod "kube-controller-manager-no-preload-954807" is "Ready"
	I1026 15:19:21.347751  908785 pod_ready.go:86] duration metric: took 176.720385ms for pod "kube-controller-manager-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.547746  908785 pod_ready.go:83] waiting for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:21.947461  908785 pod_ready.go:94] pod "kube-proxy-q8nns" is "Ready"
	I1026 15:19:21.947490  908785 pod_ready.go:86] duration metric: took 399.680606ms for pod "kube-proxy-q8nns" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.147722  908785 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548568  908785 pod_ready.go:94] pod "kube-scheduler-no-preload-954807" is "Ready"
	I1026 15:19:22.548648  908785 pod_ready.go:86] duration metric: took 400.89538ms for pod "kube-scheduler-no-preload-954807" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:22.548677  908785 pod_ready.go:40] duration metric: took 38.40866909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:22.645734  908785 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:22.649853  908785 out.go:179] * Done! kubectl is now configured to use "no-preload-954807" cluster and "default" namespace by default
	I1026 15:19:22.543199  906105 node_ready.go:49] node "default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:22.543232  906105 node_ready.go:38] duration metric: took 41.503374902s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:19:22.543247  906105 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:19:22.543322  906105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:19:22.558433  906105 api_server.go:72] duration metric: took 43.323081637s to wait for apiserver process to appear ...
	I1026 15:19:22.558456  906105 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:19:22.558476  906105 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1026 15:19:22.574126  906105 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1026 15:19:22.576212  906105 api_server.go:141] control plane version: v1.34.1
	I1026 15:19:22.576245  906105 api_server.go:131] duration metric: took 17.782398ms to wait for apiserver health ...
	I1026 15:19:22.576254  906105 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:19:22.585670  906105 system_pods.go:59] 8 kube-system pods found
	I1026 15:19:22.585709  906105 system_pods.go:61] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.585717  906105 system_pods.go:61] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.585725  906105 system_pods.go:61] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.585730  906105 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.585736  906105 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.585746  906105 system_pods.go:61] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.585754  906105 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.585761  906105 system_pods.go:61] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.585776  906105 system_pods.go:74] duration metric: took 9.51752ms to wait for pod list to return data ...
	I1026 15:19:22.585789  906105 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:19:22.589014  906105 default_sa.go:45] found service account: "default"
	I1026 15:19:22.589043  906105 default_sa.go:55] duration metric: took 3.244286ms for default service account to be created ...
	I1026 15:19:22.589054  906105 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:19:22.597482  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.597521  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.597529  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.597536  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.597541  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.597546  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.597551  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.597557  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.597566  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.597592  906105 retry.go:31] will retry after 270.93989ms: missing components: kube-dns
	I1026 15:19:22.890382  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:22.890422  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:22.890429  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:22.890436  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:22.890442  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:22.890447  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:22.890454  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:22.890458  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:22.890466  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:22.890480  906105 retry.go:31] will retry after 311.29252ms: missing components: kube-dns
	I1026 15:19:23.207300  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.207338  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.207345  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.207352  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.207356  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.207360  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.207365  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.207369  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.207375  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.207394  906105 retry.go:31] will retry after 338.060587ms: missing components: kube-dns
	I1026 15:19:23.549179  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.549216  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:19:23.549224  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.549231  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.549235  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.549239  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.549244  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.549248  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.549254  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:19:23.549269  906105 retry.go:31] will retry after 395.592761ms: missing components: kube-dns
	I1026 15:19:23.949803  906105 system_pods.go:86] 8 kube-system pods found
	I1026 15:19:23.949839  906105 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Running
	I1026 15:19:23.949846  906105 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running
	I1026 15:19:23.949854  906105 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:19:23.949861  906105 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running
	I1026 15:19:23.949866  906105 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running
	I1026 15:19:23.949874  906105 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:19:23.949879  906105 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running
	I1026 15:19:23.949884  906105 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Running
	I1026 15:19:23.949892  906105 system_pods.go:126] duration metric: took 1.360831952s to wait for k8s-apps to be running ...
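	
	The retry.go lines above poll the kube-system pod list with a jittered, slowly growing interval until nothing is missing. An illustrative Go version of that loop; waitForComponents and the stub missing() predicate are hypothetical names, not minikube's own:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForComponents polls until missing() returns an empty list, sleeping a
	// jittered, growing interval between attempts, up to a deadline.
	func waitForComponents(missing func() []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			m := missing()
			if len(m) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; still missing: %v", m)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", sleep, m)
			time.Sleep(sleep)
			delay += delay / 4 // grow the base interval gradually
		}
	}
	
	func main() {
		calls := 0
		_ = waitForComponents(func() []string {
			calls++
			if calls < 4 {
				return []string{"kube-dns"} // stub: becomes ready on the 4th poll
			}
			return nil
		}, time.Minute)
	}
	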
	I1026 15:19:23.949905  906105 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:19:23.949966  906105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:19:23.964269  906105 system_svc.go:56] duration metric: took 14.355022ms WaitForService to wait for kubelet
	I1026 15:19:23.964297  906105 kubeadm.go:586] duration metric: took 44.728950966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:19:23.964316  906105 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:19:23.967634  906105 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:19:23.967669  906105 node_conditions.go:123] node cpu capacity is 2
	I1026 15:19:23.967684  906105 node_conditions.go:105] duration metric: took 3.327873ms to run NodePressure ...
	I1026 15:19:23.967696  906105 start.go:241] waiting for startup goroutines ...
	I1026 15:19:23.967745  906105 start.go:246] waiting for cluster config update ...
	I1026 15:19:23.967757  906105 start.go:255] writing updated cluster config ...
	I1026 15:19:23.968071  906105 ssh_runner.go:195] Run: rm -f paused
	I1026 15:19:23.972391  906105 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:23.978846  906105 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.984309  906105 pod_ready.go:94] pod "coredns-66bc5c9577-zm8vb" is "Ready"
	I1026 15:19:23.984341  906105 pod_ready.go:86] duration metric: took 5.466432ms for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.987133  906105 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.992540  906105 pod_ready.go:94] pod "etcd-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.992578  906105 pod_ready.go:86] duration metric: took 5.419399ms for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.995145  906105 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:23.999951  906105 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:23.999979  906105 pod_ready.go:86] duration metric: took 4.806707ms for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.003124  906105 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.376257  906105 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:24.376287  906105 pod_ready.go:86] duration metric: took 373.130356ms for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.576393  906105 pod_ready.go:83] waiting for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:24.976974  906105 pod_ready.go:94] pod "kube-proxy-nbcd6" is "Ready"
	I1026 15:19:24.977002  906105 pod_ready.go:86] duration metric: took 400.540602ms for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.178270  906105 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576418  906105 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-494684" is "Ready"
	I1026 15:19:25.576444  906105 pod_ready.go:86] duration metric: took 398.150209ms for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:19:25.576456  906105 pod_ready.go:40] duration metric: took 1.604033075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:19:25.629832  906105 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:19:25.636667  906105 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-494684" cluster and "default" namespace by default
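	
	The pod_ready waits in both logs above reduce to listing labelled kube-system pods and checking each pod's PodReady condition. A client-go sketch under that assumption (the kubeconfig path is a placeholder, and podsReady is an illustrative helper, not the pod_ready.go implementation):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podsReady reports whether every kube-system pod matching selector has the
	// PodReady condition set to True.
	func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := podsReady(cs, "k8s-app=kube-dns")
			if err == nil && ok {
				fmt.Println("kube-dns pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
	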
	
	
	==> CRI-O <==
	Oct 26 15:19:22 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:22.775781129Z" level=info msg="Created container b3af798f0b8e551db17cabb2ad8bec118143d8dbff11e5d73fc9957295b1062b: kube-system/coredns-66bc5c9577-zm8vb/coredns" id=cc0b7349-755b-494a-b1af-3cde2aaa81c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:22 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:22.776679059Z" level=info msg="Starting container: b3af798f0b8e551db17cabb2ad8bec118143d8dbff11e5d73fc9957295b1062b" id=f563d0fb-55c9-448c-b881-dc4440fe2109 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:19:22 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:22.78897014Z" level=info msg="Started container" PID=1717 containerID=b3af798f0b8e551db17cabb2ad8bec118143d8dbff11e5d73fc9957295b1062b description=kube-system/coredns-66bc5c9577-zm8vb/coredns id=f563d0fb-55c9-448c-b881-dc4440fe2109 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa160ee9d2021b390696868db797278007d30fd6f9a8d33b5b568de1047251d5
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.103221009Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fcbdcb90-4883-4471-b3fa-78d6d00d7b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.103304235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.112073439Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb UID:0f11c185-ade9-4c11-afe9-250f741f209d NetNS:/var/run/netns/4d7e1713-3f9d-4c94-848e-c725df2abb28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d1e0}] Aliases:map[]}"
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.112291747Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.124065579Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb UID:0f11c185-ade9-4c11-afe9-250f741f209d NetNS:/var/run/netns/4d7e1713-3f9d-4c94-848e-c725df2abb28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d1e0}] Aliases:map[]}"
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.124373463Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.132175239Z" level=info msg="Ran pod sandbox f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb with infra container: default/busybox/POD" id=fcbdcb90-4883-4471-b3fa-78d6d00d7b7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.133611481Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f52d977a-694c-4480-8761-f0fe776488c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.133849818Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f52d977a-694c-4480-8761-f0fe776488c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.133903627Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f52d977a-694c-4480-8761-f0fe776488c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.136109157Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=190e74a9-7ef3-4e55-b993-8c91c07cd42c name=/runtime.v1.ImageService/PullImage
	Oct 26 15:19:27 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:27.138896749Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.130254196Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=190e74a9-7ef3-4e55-b993-8c91c07cd42c name=/runtime.v1.ImageService/PullImage
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.133074887Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e27d889c-68ca-49e9-9d91-ff88e007558b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.136445246Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74dc520a-17f7-4659-b9d6-d84a46d033d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.149082833Z" level=info msg="Creating container: default/busybox/busybox" id=6fdcad5b-9905-4627-ad04-9d75af6ad7cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.149214732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.157145946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.157914118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.176032865Z" level=info msg="Created container 296638338cbf94791ad58c2d96819d8ac8fb91aceb4ddb4024911936b0000c9d: default/busybox/busybox" id=6fdcad5b-9905-4627-ad04-9d75af6ad7cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.178423546Z" level=info msg="Starting container: 296638338cbf94791ad58c2d96819d8ac8fb91aceb4ddb4024911936b0000c9d" id=738de8b1-1155-46fd-94dc-ae648fee2943 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:19:29 default-k8s-diff-port-494684 crio[837]: time="2025-10-26T15:19:29.180334156Z" level=info msg="Started container" PID=1771 containerID=296638338cbf94791ad58c2d96819d8ac8fb91aceb4ddb4024911936b0000c9d description=default/busybox/busybox id=738de8b1-1155-46fd-94dc-ae648fee2943 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb
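The pull sequence above (ImageStatus miss, PullImage, digest-resolved ImageStatus, CreateContainer, StartContainer) is the standard CRI round trip. When reproducing it by hand on the node, the same calls can be driven with crictl, e.g. `sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc` followed by `sudo crictl images --digests` to confirm the resolved sha256 digest matches the one logged.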
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	296638338cbf9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   f0a1e41fa00e8       busybox                                                default
	b3af798f0b8e5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   fa160ee9d2021       coredns-66bc5c9577-zm8vb                               kube-system
	86f494d2118dc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   2b08fde76b163       storage-provisioner                                    kube-system
	f8f1c8e05adff       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   ea5b909aca60f       kube-proxy-nbcd6                                       kube-system
	6f903b7a49211       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   27f182e68ec2a       kindnet-bfc62                                          kube-system
	fa61a928d91f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   1a1de33c5e495       kube-scheduler-default-k8s-diff-port-494684            kube-system
	123d628afaf5b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0f325ee2bc6e8       kube-controller-manager-default-k8s-diff-port-494684   kube-system
	b5c423dfd6e22       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e7aa5373f99e1       kube-apiserver-default-k8s-diff-port-494684            kube-system
	e3dc9dcea6835       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6584611e1a632       etcd-default-k8s-diff-port-494684                      kube-system
	
	
	==> coredns [b3af798f0b8e551db17cabb2ad8bec118143d8dbff11e5d73fc9957295b1062b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54349 - 58340 "HINFO IN 5198253223447507500.896166039854565050. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013356495s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-494684
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-494684
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-494684
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-494684
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:19:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:19:36 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:19:36 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:19:36 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:19:36 +0000   Sun, 26 Oct 2025 15:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-494684
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6e20c02-f12b-4169-8ea1-8297398ff607
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-zm8vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-494684                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-bfc62                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-494684             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-494684    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-nbcd6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-494684             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
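	  (The percentages are taken against allocatable capacity: 850m CPU of 2 full cores (2000m) is 42.5%, printed truncated as 42%; 220Mi (225280Ki) of 8022300Ki allocatable memory is about 2.8%, printed as 2%.)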
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-494684 event: Registered Node default-k8s-diff-port-494684 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-494684 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 14:56] overlayfs: idmapped layers are currently not supported
	[Oct26 14:58] overlayfs: idmapped layers are currently not supported
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3dc9dcea68354b5764a0223dbe71774cef81f6cc82dc2a4500c1970110910b3] <==
	{"level":"warn","ts":"2025-10-26T15:18:29.294818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.321169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.352048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.363175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.375385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.420921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.501424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.519447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.569158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.641760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.693504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.732611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.834237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.851362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.870320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.928596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:29.955163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.028612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.037559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.068227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.147246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.204516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.244425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.279657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:18:30.446234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52098","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:19:37 up  5:02,  0 user,  load average: 3.05, 3.48, 3.06
	Linux default-k8s-diff-port-494684 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f903b7a492113e2711e86cf8285813ff3c4ddaf8f5f1c85215706c86551383f] <==
	I1026 15:18:41.563892       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:18:41.633776       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:18:41.633957       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:18:41.633971       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:18:41.633986       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:18:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:18:41.826360       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:18:41.826448       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:18:41.826481       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:18:41.826635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:19:11.827234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:19:11.827240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:19:11.827354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:19:11.827425       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:19:13.427483       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:19:13.427514       1 metrics.go:72] Registering metrics
	I1026 15:19:13.427591       1 controller.go:711] "Syncing nftables rules"
	I1026 15:19:21.833005       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:19:21.833061       1 main.go:301] handling current node
	I1026 15:19:31.825952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:19:31.825986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b5c423dfd6e22e0014964b19deec97146d81c35dbda9f545dc6cd90a017e19f8] <==
	I1026 15:18:31.976263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:18:31.976269       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:18:31.990968       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:18:32.003340       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:18:32.018201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:18:32.025566       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:32.029043       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:18:32.488750       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:18:32.521000       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:18:32.521091       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:18:33.594500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:18:33.674248       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:18:33.780891       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:18:33.790011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 15:18:33.791434       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:18:33.801559       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:18:33.919389       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:18:34.988178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:18:35.074017       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:18:35.094869       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:18:39.726602       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:18:39.777238       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:18:39.929687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:40.164680       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1026 15:19:35.084405       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:40454: use of closed network connection
	
	
	==> kube-controller-manager [123d628afaf5b7af211ff33371193bdc5c0950f80301a93440106e714dbfb702] <==
	I1026 15:18:38.985592       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:18:38.993344       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:18:38.997123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:18:38.997230       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:18:38.997263       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:18:38.997347       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:18:39.026138       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:18:39.026197       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:18:39.026222       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:18:39.026306       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:18:39.026390       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-494684"
	I1026 15:18:39.026436       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:18:39.026478       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:18:39.026504       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:18:39.027595       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:18:39.027647       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:18:39.027689       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:18:39.027725       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:18:39.027975       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:18:39.028079       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:18:39.028105       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:18:39.032782       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:18:39.033442       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:18:39.044381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:19:24.033826       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f8f1c8e05adffa2e391cd23e61b513443a0585f7265288d0ee0b4cd6c2d71460] <==
	I1026 15:18:41.812532       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:18:41.886275       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:18:42.000476       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:18:42.000570       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:18:42.000683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:18:42.073159       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:18:42.073334       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:18:42.083232       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:18:42.083706       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:18:42.083734       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:42.098784       1 config.go:200] "Starting service config controller"
	I1026 15:18:42.098908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:18:42.098962       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:18:42.099012       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:18:42.099053       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:18:42.099101       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:18:42.105617       1 config.go:309] "Starting node config controller"
	I1026 15:18:42.106323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:18:42.113971       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:18:42.199569       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:18:42.199624       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:18:42.199654       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
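The "Kube-proxy configuration may be incomplete or incorrect" line above is advisory: with nodePortAddresses unset, NodePort services accept traffic on every local IP. As the message itself suggests, setting `--nodeport-addresses primary` (or the equivalent nodePortAddresses field in the KubeProxyConfiguration) restricts NodePorts to the node's primary addresses.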
	
	
	==> kube-scheduler [fa61a928d91f4fe7b4b3e73391c7e156be5b3eb87b0e670b983af1a8ea0b2599] <==
	I1026 15:18:32.336030       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:18:32.337946       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:18:32.338114       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 15:18:32.349080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 15:18:32.371761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:18:32.371899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:18:32.371977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:18:32.372425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:18:32.372850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:18:32.372977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:18:32.373094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:18:32.373217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:18:32.373324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:18:32.373374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:18:32.373413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:18:32.373452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:18:32.373490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:18:32.373521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:18:32.373553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:18:32.373591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:18:32.373646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:18:32.373665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:18:33.206964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:18:33.236608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1026 15:18:33.837163       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
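The "Failed to watch ... is forbidden" burst at 15:18:32 is the usual control-plane bootstrap race: the scheduler starts its informers before the default RBAC roles have been reconciled, so the first list calls are rejected; by 15:18:33 the caches sync and the errors stop. Had they persisted, `kubectl auth can-i list pods --as=system:kube-scheduler` would be one way to confirm whether the RBAC grants are in place.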
	
	
	==> kubelet <==
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:40.170389    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da5e9adf-608b-4892-a105-a03c1dea6660-kube-proxy\") pod \"kube-proxy-nbcd6\" (UID: \"da5e9adf-608b-4892-a105-a03c1dea6660\") " pod="kube-system/kube-proxy-nbcd6"
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:40.170523    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/044af459-c8ff-41f0-976f-0d52643cf9fb-cni-cfg\") pod \"kindnet-bfc62\" (UID: \"044af459-c8ff-41f0-976f-0d52643cf9fb\") " pod="kube-system/kindnet-bfc62"
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:40.170674    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044af459-c8ff-41f0-976f-0d52643cf9fb-lib-modules\") pod \"kindnet-bfc62\" (UID: \"044af459-c8ff-41f0-976f-0d52643cf9fb\") " pod="kube-system/kindnet-bfc62"
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.409356    1300 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.409392    1300 projected.go:196] Error preparing data for projected volume kube-api-access-sk4sv for pod kube-system/kube-proxy-nbcd6: configmap "kube-root-ca.crt" not found
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.409470    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da5e9adf-608b-4892-a105-a03c1dea6660-kube-api-access-sk4sv podName:da5e9adf-608b-4892-a105-a03c1dea6660 nodeName:}" failed. No retries permitted until 2025-10-26 15:18:40.909445512 +0000 UTC m=+6.039048231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4sv" (UniqueName: "kubernetes.io/projected/da5e9adf-608b-4892-a105-a03c1dea6660-kube-api-access-sk4sv") pod "kube-proxy-nbcd6" (UID: "da5e9adf-608b-4892-a105-a03c1dea6660") : configmap "kube-root-ca.crt" not found
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.418444    1300 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.418635    1300 projected.go:196] Error preparing data for projected volume kube-api-access-ptddl for pod kube-system/kindnet-bfc62: configmap "kube-root-ca.crt" not found
	Oct 26 15:18:40 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:18:40.418811    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/044af459-c8ff-41f0-976f-0d52643cf9fb-kube-api-access-ptddl podName:044af459-c8ff-41f0-976f-0d52643cf9fb nodeName:}" failed. No retries permitted until 2025-10-26 15:18:40.918777257 +0000 UTC m=+6.048379977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptddl" (UniqueName: "kubernetes.io/projected/044af459-c8ff-41f0-976f-0d52643cf9fb-kube-api-access-ptddl") pod "kindnet-bfc62" (UID: "044af459-c8ff-41f0-976f-0d52643cf9fb") : configmap "kube-root-ca.crt" not found
	Oct 26 15:18:41 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:41.008005    1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:18:41 default-k8s-diff-port-494684 kubelet[1300]: W1026 15:18:41.317682    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/crio-ea5b909aca60f3603cc8f1100a102d96f106d43d8a58d2b850d47c4f9b7818be WatchSource:0}: Error finding container ea5b909aca60f3603cc8f1100a102d96f106d43d8a58d2b850d47c4f9b7818be: Status 404 returned error can't find the container with id ea5b909aca60f3603cc8f1100a102d96f106d43d8a58d2b850d47c4f9b7818be
	Oct 26 15:18:41 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:41.603766    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbcd6" podStartSLOduration=2.603744544 podStartE2EDuration="2.603744544s" podCreationTimestamp="2025-10-26 15:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:41.60347602 +0000 UTC m=+6.733078764" watchObservedRunningTime="2025-10-26 15:18:41.603744544 +0000 UTC m=+6.733347264"
	Oct 26 15:18:45 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:18:45.423021    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bfc62" podStartSLOduration=6.422991932 podStartE2EDuration="6.422991932s" podCreationTimestamp="2025-10-26 15:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:41.665175456 +0000 UTC m=+6.794778193" watchObservedRunningTime="2025-10-26 15:18:45.422991932 +0000 UTC m=+10.552594652"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:22.218036    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:22.356954    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6f7r\" (UniqueName: \"kubernetes.io/projected/76a854e4-16a9-4614-a574-43c882aa10b5-kube-api-access-s6f7r\") pod \"storage-provisioner\" (UID: \"76a854e4-16a9-4614-a574-43c882aa10b5\") " pod="kube-system/storage-provisioner"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:22.357177    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94c0c5a6-92d9-4c12-ac44-1514a81158fa-config-volume\") pod \"coredns-66bc5c9577-zm8vb\" (UID: \"94c0c5a6-92d9-4c12-ac44-1514a81158fa\") " pod="kube-system/coredns-66bc5c9577-zm8vb"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:22.357266    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4krf\" (UniqueName: \"kubernetes.io/projected/94c0c5a6-92d9-4c12-ac44-1514a81158fa-kube-api-access-r4krf\") pod \"coredns-66bc5c9577-zm8vb\" (UID: \"94c0c5a6-92d9-4c12-ac44-1514a81158fa\") " pod="kube-system/coredns-66bc5c9577-zm8vb"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:22.357351    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/76a854e4-16a9-4614-a574-43c882aa10b5-tmp\") pod \"storage-provisioner\" (UID: \"76a854e4-16a9-4614-a574-43c882aa10b5\") " pod="kube-system/storage-provisioner"
	Oct 26 15:19:22 default-k8s-diff-port-494684 kubelet[1300]: W1026 15:19:22.609821    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/crio-2b08fde76b163e6984b3852cd75b855cc581fe8a1376051ba714ef524ead2857 WatchSource:0}: Error finding container 2b08fde76b163e6984b3852cd75b855cc581fe8a1376051ba714ef524ead2857: Status 404 returned error can't find the container with id 2b08fde76b163e6984b3852cd75b855cc581fe8a1376051ba714ef524ead2857
	Oct 26 15:19:23 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:23.685957    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.685937248 podStartE2EDuration="42.685937248s" podCreationTimestamp="2025-10-26 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:19:23.672259264 +0000 UTC m=+48.801861992" watchObservedRunningTime="2025-10-26 15:19:23.685937248 +0000 UTC m=+48.815539968"
	Oct 26 15:19:25 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:25.893872    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zm8vb" podStartSLOduration=45.893850819 podStartE2EDuration="45.893850819s" podCreationTimestamp="2025-10-26 15:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:19:23.686899893 +0000 UTC m=+48.816502621" watchObservedRunningTime="2025-10-26 15:19:25.893850819 +0000 UTC m=+51.023453547"
	Oct 26 15:19:25 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:19:25.901899    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox\" is forbidden: User \"system:node:default-k8s-diff-port-494684\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'default-k8s-diff-port-494684' and this object" podUID="0f11c185-ade9-4c11-afe9-250f741f209d" pod="default/busybox"
	Oct 26 15:19:25 default-k8s-diff-port-494684 kubelet[1300]: E1026 15:19:25.902440    1300 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-494684\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'default-k8s-diff-port-494684' and this object" logger="UnhandledError" reflector="object-\"default\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 26 15:19:25 default-k8s-diff-port-494684 kubelet[1300]: I1026 15:19:25.989417    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbjj\" (UniqueName: \"kubernetes.io/projected/0f11c185-ade9-4c11-afe9-250f741f209d-kube-api-access-kvbjj\") pod \"busybox\" (UID: \"0f11c185-ade9-4c11-afe9-250f741f209d\") " pod="default/busybox"
	Oct 26 15:19:27 default-k8s-diff-port-494684 kubelet[1300]: W1026 15:19:27.126895    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/crio-f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb WatchSource:0}: Error finding container f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb: Status 404 returned error can't find the container with id f0a1e41fa00e88f5cda67f27cb56c70b2f6d769920fa8cb469bb4fbde267febb
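The two "no relationship found between node ... and this object" errors at 15:19:25 look like the node-authorizer flavor of the same bootstrap race: the busybox pod had only just been bound, so the authorizer's graph did not yet contain the pod-to-node edge, and the kubelet's first status fetch and ConfigMap watch were rejected. The sandbox is running by 15:19:27, so the condition was transient.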
	
	
	==> storage-provisioner [86f494d2118dce6e8592462b3745d298aad47557e7ff277aa4bfc87c91f177f8] <==
	I1026 15:19:22.759712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:19:22.778912       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:19:22.781181       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:19:22.808601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:22.896985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:22.897286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:19:22.897456       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d25c15-ba1a-4898-94ee-0ef3b44a7fcb", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-494684_ae68ad00-ecfa-4bac-87f4-591b26278121 became leader
	I1026 15:19:22.900574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_ae68ad00-ecfa-4bac-87f4-591b26278121!
	W1026 15:19:22.910679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:22.918303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:19:23.000925       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_ae68ad00-ecfa-4bac-87f4-591b26278121!
	W1026 15:19:24.921523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:24.926030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:26.929461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:26.934392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:28.937621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:28.942064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:30.944823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:30.956604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:32.959779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:32.964546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:34.975028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:34.985937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:36.989174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:19:37.000421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
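
The storage-provisioner log above walks through its startup sequence: initialize, attempt to acquire the kube-system/k8s.io-minikube-hostpath lease, become leader, then start the provisioner controller. The repeated warnings.go:70 lines appear because the election still takes its lock on a v1 Endpoints object, deprecated in v1.33+ in favor of coordination.k8s.io Leases. A minimal Go sketch of the same election using client-go's Lease lock; the identity string, timings, and callbacks are illustrative placeholders, not the provisioner's actual code:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock on the same name/namespace the provisioner uses above;
	// a Lease lock emits none of the v1 Endpoints deprecation warnings.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // placeholder
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // common defaults, not values taken from the provisioner
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}

Only the lock type differs here; the acquire-then-start ordering matches the provisioner output above.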
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.614403ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
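
The exit-status-11 failure above is minikube's pre-flight "is the cluster paused" check (check paused: list paused), which shells into the node and runs sudo runc list -f json; on this crio node /run/runc does not exist yet, so runc exits with status 1 before emitting any JSON. A rough Go sketch of such a check under those assumptions; this is a hypothetical helper, not minikube's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer captures the two fields of interest from `runc list -f json`,
// which prints a JSON array of container state objects.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// listPaused mirrors the failing check: run `sudo runc list -f json` and
// collect the IDs of paused containers.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The path this test hit: runc exits 1 with
		// "open /run/runc: no such file or directory".
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err) // surfaces as MK_ADDON_ENABLE_PAUSED above
		return
	}
	fmt.Println("paused containers:", paused)
}

For this test to pass, a check along these lines would have to treat a missing /run/runc state directory as "no containers" rather than as a hard error.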
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-810872
helpers_test.go:243: (dbg) docker inspect newest-cni-810872:

-- stdout --
	[
	    {
	        "Id": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	        "Created": "2025-10-26T15:19:50.863323675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 914148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:19:50.930981075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hosts",
	        "LogPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1-json.log",
	        "Name": "/newest-cni-810872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-810872:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-810872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	                "LowerDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-810872",
	                "Source": "/var/lib/docker/volumes/newest-cni-810872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-810872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-810872",
	                "name.minikube.sigs.k8s.io": "newest-cni-810872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d95d8ccb3c9bfc0f4b7b20a0dce2044d4988f20cd348b6ace8f0f736617496c8",
	            "SandboxKey": "/var/run/docker/netns/d95d8ccb3c9b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-810872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:b2:67:5d:27:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd72f372b9d59036c2bf74ba038a42769a6a6fe23c0e4f9a4a483ae08bcd16c7",
	                    "EndpointID": "37c94162c3c135075ddeed26985df027fbcc8e3cf6dce8ef64f3c6b22c35a59a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-810872",
	                        "fcebd0173001"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
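
Every container port in the inspect output above is published dynamically on 127.0.0.1 (22/tcp landed on host port 33852). The cli_runner entries later in this log resolve those mappings with a Go template passed to docker container inspect -f; below is a self-contained sketch of that lookup, with the template string copied verbatim from the log (the function name is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostSSHPort asks the Docker CLI which host port is bound to the container's
// 22/tcp, the same query the cli_runner lines below perform.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-810872")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // 33852 in this run
}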
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25: (1.146588179s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-018497 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p cert-expiration-963871                                                                                                                                                                                                                     │ cert-expiration-963871       │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ delete  │ -p disable-driver-mounts-934812                                                                                                                                                                                                               │ disable-driver-mounts-934812 │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:16 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                                                                                                    │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:19:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:19:51.147435  914177 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:19:51.147672  914177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:19:51.147713  914177 out.go:374] Setting ErrFile to fd 2...
	I1026 15:19:51.147731  914177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:19:51.148204  914177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:19:51.148768  914177 out.go:368] Setting JSON to false
	I1026 15:19:51.150038  914177 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18144,"bootTime":1761473848,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:19:51.150148  914177 start.go:141] virtualization:  
	I1026 15:19:51.154781  914177 out.go:179] * [default-k8s-diff-port-494684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:19:51.158396  914177 notify.go:220] Checking for updates...
	I1026 15:19:51.161593  914177 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:19:51.164520  914177 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:19:51.167514  914177 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:19:51.170448  914177 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:19:51.173641  914177 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:19:51.176533  914177 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:19:51.179881  914177 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:19:51.180493  914177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:19:51.231203  914177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:19:51.231321  914177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:19:51.363510  914177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-26 15:19:51.353212864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:19:51.363615  914177 docker.go:318] overlay module found
	I1026 15:19:51.368978  914177 out.go:179] * Using the docker driver based on existing profile
	I1026 15:19:51.371812  914177 start.go:305] selected driver: docker
	I1026 15:19:51.371832  914177 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:19:51.371952  914177 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:19:51.372948  914177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:19:51.511716  914177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-26 15:19:51.49938188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:19:51.512052  914177 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:19:51.512083  914177 cni.go:84] Creating CNI manager for ""
	I1026 15:19:51.512144  914177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:19:51.512178  914177 start.go:349] cluster config:
	{Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:19:51.515686  914177 out.go:179] * Starting "default-k8s-diff-port-494684" primary control-plane node in "default-k8s-diff-port-494684" cluster
	I1026 15:19:51.519033  914177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:19:51.522800  914177 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:19:51.526074  914177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:19:51.526141  914177 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:19:51.526154  914177 cache.go:58] Caching tarball of preloaded images
	I1026 15:19:51.526267  914177 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:19:51.526281  914177 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:19:51.526400  914177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/config.json ...
	I1026 15:19:51.526512  914177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:19:51.568482  914177 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:19:51.568503  914177 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:19:51.568521  914177 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:19:51.568546  914177 start.go:360] acquireMachinesLock for default-k8s-diff-port-494684: {Name:mk0ed1a7373f921811143d09c40dcffb09852703 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:19:51.568600  914177 start.go:364] duration metric: took 35.242µs to acquireMachinesLock for "default-k8s-diff-port-494684"
	I1026 15:19:51.568620  914177 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:19:51.568625  914177 fix.go:54] fixHost starting: 
	I1026 15:19:51.569073  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:19:51.602419  914177 fix.go:112] recreateIfNeeded on default-k8s-diff-port-494684: state=Stopped err=<nil>
	W1026 15:19:51.602447  914177 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:19:50.748835  913646 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-810872:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.897810799s)
	I1026 15:19:50.748888  913646 kic.go:203] duration metric: took 4.897959256s to extract preloaded images to volume ...
	W1026 15:19:50.749040  913646 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 15:19:50.749170  913646 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:19:50.843898  913646 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-810872 --name newest-cni-810872 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-810872 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-810872 --network newest-cni-810872 --ip 192.168.85.2 --volume newest-cni-810872:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:19:51.245612  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Running}}
	I1026 15:19:51.301044  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:19:51.385611  913646 cli_runner.go:164] Run: docker exec newest-cni-810872 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:19:51.467754  913646 oci.go:144] the created container "newest-cni-810872" has a running status.
	I1026 15:19:51.467791  913646 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa...
	I1026 15:19:52.544770  913646 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:19:52.571699  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:19:52.592089  913646 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:19:52.592116  913646 kic_runner.go:114] Args: [docker exec --privileged newest-cni-810872 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:19:52.654417  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:19:52.684173  913646 machine.go:93] provisionDockerMachine start ...
	I1026 15:19:52.684281  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:52.710839  913646 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:52.713071  913646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33852 <nil> <nil>}
	I1026 15:19:52.713088  913646 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:19:52.897959  913646 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:19:52.898041  913646 ubuntu.go:182] provisioning hostname "newest-cni-810872"
	I1026 15:19:52.898157  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:52.929523  913646 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:52.929822  913646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33852 <nil> <nil>}
	I1026 15:19:52.929833  913646 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-810872 && echo "newest-cni-810872" | sudo tee /etc/hostname
	I1026 15:19:53.158974  913646 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:19:53.159124  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:53.192973  913646 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:53.193285  913646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33852 <nil> <nil>}
	I1026 15:19:53.193303  913646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-810872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-810872/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-810872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:19:53.356994  913646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:19:53.357089  913646 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:19:53.357149  913646 ubuntu.go:190] setting up certificates
	I1026 15:19:53.357179  913646 provision.go:84] configureAuth start
	I1026 15:19:53.357271  913646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:19:53.373933  913646 provision.go:143] copyHostCerts
	I1026 15:19:53.374009  913646 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:19:53.374021  913646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:19:53.374105  913646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:19:53.374206  913646 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:19:53.374216  913646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:19:53.374244  913646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:19:53.374305  913646 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:19:53.374314  913646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:19:53.374338  913646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:19:53.374391  913646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.newest-cni-810872 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-810872]
	I1026 15:19:53.625645  913646 provision.go:177] copyRemoteCerts
	I1026 15:19:53.625715  913646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:19:53.625757  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:53.643382  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:19:53.748894  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:19:53.766950  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:19:53.785269  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:19:53.803778  913646 provision.go:87] duration metric: took 446.558769ms to configureAuth
	I1026 15:19:53.803863  913646 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:19:53.804065  913646 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:19:53.804185  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:53.823364  913646 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:53.823677  913646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33852 <nil> <nil>}
	I1026 15:19:53.823698  913646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:19:54.124437  913646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:19:54.124461  913646 machine.go:96] duration metric: took 1.440256578s to provisionDockerMachine
	I1026 15:19:54.124471  913646 client.go:171] duration metric: took 9.05021191s to LocalClient.Create
	I1026 15:19:54.124485  913646 start.go:167] duration metric: took 9.050299181s to libmachine.API.Create "newest-cni-810872"
	I1026 15:19:54.124495  913646 start.go:293] postStartSetup for "newest-cni-810872" (driver="docker")
	I1026 15:19:54.124506  913646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:19:54.124576  913646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:19:54.124621  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:54.149055  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:19:54.253220  913646 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:19:54.256730  913646 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:19:54.256760  913646 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:19:54.256770  913646 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:19:54.256826  913646 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:19:54.256919  913646 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:19:54.257028  913646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:19:54.264788  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:19:54.289324  913646 start.go:296] duration metric: took 164.814036ms for postStartSetup
	I1026 15:19:54.289717  913646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:19:54.309733  913646 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/config.json ...
	I1026 15:19:54.310038  913646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:19:54.310094  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:54.326798  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:19:54.429969  913646 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:19:54.434745  913646 start.go:128] duration metric: took 9.366485275s to createHost
	I1026 15:19:54.434770  913646 start.go:83] releasing machines lock for "newest-cni-810872", held for 9.366659793s
	I1026 15:19:54.434846  913646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:19:54.453057  913646 ssh_runner.go:195] Run: cat /version.json
	I1026 15:19:54.453130  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:54.453411  913646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:19:54.453470  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:19:54.470445  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:19:54.486058  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:19:54.572269  913646 ssh_runner.go:195] Run: systemctl --version
	I1026 15:19:54.667782  913646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:19:54.703642  913646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:19:54.708581  913646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:19:54.708673  913646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:19:54.736304  913646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
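
The find/-exec mv pattern logged just above is how conflicting bridge and podman CNI configs are parked (renamed, not deleted) so they can be restored later. A standalone sketch of the same rename, with the globs quoted (they appear unquoted in the logged command because ssh_runner passes them pre-tokenized):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
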
	I1026 15:19:54.736329  913646 start.go:495] detecting cgroup driver to use...
	I1026 15:19:54.736387  913646 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:19:54.736462  913646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:19:54.754554  913646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:19:54.767501  913646 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:19:54.767566  913646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:19:54.785681  913646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:19:54.803742  913646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:19:54.931713  913646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:19:55.075754  913646 docker.go:234] disabling docker service ...
	I1026 15:19:55.075878  913646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:19:55.097680  913646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:19:55.113023  913646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:19:55.240547  913646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:19:55.375525  913646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
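
The sequence above is the full handover from Docker to CRI-O: each competing unit is stopped, its socket disabled, and its service masked so socket activation cannot bring it back, then the engine is confirmed inactive. Condensed into equivalent shell, as a sketch:

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit"        # -f also stops dependent units
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker engine is down"
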
	I1026 15:19:55.389767  913646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:19:55.406188  913646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:19:55.406283  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.415588  913646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:19:55.415702  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.424966  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.433983  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.442997  913646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:19:55.452295  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.462474  913646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.478479  913646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:55.491191  913646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:19:55.500270  913646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:19:55.509016  913646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:19:55.652888  913646 ssh_runner.go:195] Run: sudo systemctl restart crio
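
Everything from the crictl.yaml write at 15:19:55.389 through the restart above is plain shell against CRI-O's drop-in config. A condensed sketch of the same sequence, using the same paths and values as this run:

	# point crictl at CRI-O's socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and cgroup driver in the drop-in
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# enable forwarding, then reload units and restart the runtime
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
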
	I1026 15:19:55.809575  913646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:19:55.809656  913646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:19:55.813797  913646 start.go:563] Will wait 60s for crictl version
	I1026 15:19:55.813859  913646 ssh_runner.go:195] Run: which crictl
	I1026 15:19:55.817512  913646 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:19:55.853065  913646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:19:55.853152  913646 ssh_runner.go:195] Run: crio --version
	I1026 15:19:55.906913  913646 ssh_runner.go:195] Run: crio --version
	I1026 15:19:55.944244  913646 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:19:55.946884  913646 cli_runner.go:164] Run: docker network inspect newest-cni-810872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:19:55.986610  913646 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:19:55.991073  913646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:19:56.003718  913646 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
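
The /etc/hosts update at 15:19:55.991 is a delete-then-append rewrite rather than a blind append, which keeps repeated starts idempotent. The same bash pattern, generalized (NAME and IP are placeholders standing in for the values above):

	NAME=host.minikube.internal
	IP=192.168.85.1
	# drop any existing entry for NAME, then append a fresh one atomically via cp
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
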
	I1026 15:19:51.606216  914177 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-494684" ...
	I1026 15:19:51.606299  914177 cli_runner.go:164] Run: docker start default-k8s-diff-port-494684
	I1026 15:19:52.104413  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:19:52.178667  914177 kic.go:430] container "default-k8s-diff-port-494684" state is running.
	I1026 15:19:52.179064  914177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-494684
	I1026 15:19:52.245550  914177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/config.json ...
	I1026 15:19:52.246147  914177 machine.go:93] provisionDockerMachine start ...
	I1026 15:19:52.246218  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:52.289288  914177 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:52.289611  914177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:19:52.289621  914177 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:19:52.290315  914177 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:19:55.448747  914177 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-494684
	
	I1026 15:19:55.448828  914177 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-494684"
	I1026 15:19:55.448948  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:55.476606  914177 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:55.477105  914177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:19:55.477137  914177 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-494684 && echo "default-k8s-diff-port-494684" | sudo tee /etc/hostname
	I1026 15:19:55.647669  914177 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-494684
	
	I1026 15:19:55.647826  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:55.670545  914177 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:55.670856  914177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:19:55.670874  914177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-494684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-494684/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-494684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:19:55.829306  914177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:19:55.829333  914177 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:19:55.829359  914177 ubuntu.go:190] setting up certificates
	I1026 15:19:55.829369  914177 provision.go:84] configureAuth start
	I1026 15:19:55.829433  914177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-494684
	I1026 15:19:55.853881  914177 provision.go:143] copyHostCerts
	I1026 15:19:55.853940  914177 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:19:55.853958  914177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:19:55.854035  914177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:19:55.854135  914177 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:19:55.854140  914177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:19:55.854171  914177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:19:55.854222  914177 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:19:55.854226  914177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:19:55.854248  914177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:19:55.854291  914177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-494684 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-494684 localhost minikube]
	I1026 15:19:56.007401  913646 kubeadm.go:883] updating cluster {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:19:56.007584  913646 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:19:56.007688  913646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:19:56.050378  913646 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:19:56.050399  913646 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:19:56.050455  913646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:19:56.092832  913646 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:19:56.092926  913646 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:19:56.092949  913646 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:19:56.093094  913646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-810872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
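
The [Unit]/[Service] fragment above becomes the kubelet systemd override that is copied to the node a moment later (the 367-byte 10-kubeadm.conf scp at 15:19:56.189). A sketch of installing such an override by hand, using the exact ExecStart from this run (the exact split between 10-kubeadm.conf and kubelet.service is not shown in the log, so this single-file layout is an assumption):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-810872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
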
	I1026 15:19:56.093223  913646 ssh_runner.go:195] Run: crio config
	I1026 15:19:56.172507  913646 cni.go:84] Creating CNI manager for ""
	I1026 15:19:56.172579  913646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:19:56.172615  913646 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:19:56.172663  913646 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-810872 NodeName:newest-cni-810872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:19:56.172847  913646 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-810872"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
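The kubeadm config above is a single file carrying four API documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. On kubeadm v1.26 and newer it can be sanity-checked before the init below ever runs; a sketch, assuming the dump is saved as kubeadm.yaml:

	# static validation of the multi-document config (kubeadm >= 1.26)
	kubeadm config validate --config kubeadm.yaml
	# or walk the full init path without mutating the node
	sudo kubeadm init --config kubeadm.yaml --dry-run
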
	I1026 15:19:56.172967  913646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:19:56.181252  913646 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:19:56.181367  913646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:19:56.189204  913646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:19:56.208452  913646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:19:56.222820  913646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 15:19:56.237713  913646 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:19:56.241888  913646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:19:56.252643  913646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:19:56.402322  913646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:19:56.430367  913646 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872 for IP: 192.168.85.2
	I1026 15:19:56.430385  913646 certs.go:195] generating shared ca certs ...
	I1026 15:19:56.430400  913646 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:56.430529  913646 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:19:56.430572  913646 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:19:56.430579  913646 certs.go:257] generating profile certs ...
	I1026 15:19:56.430642  913646 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.key
	I1026 15:19:56.430661  913646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.crt with IP's: []
	I1026 15:19:56.574080  913646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.crt ...
	I1026 15:19:56.574154  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.crt: {Name:mk6c8ac2df05d9b2a2c3de373efaf7225e194869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:56.574362  913646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.key ...
	I1026 15:19:56.574398  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.key: {Name:mk5a86c67b87ff48ed51bd5a27258881fb2fb57e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:56.574559  913646 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940
	I1026 15:19:56.574601  913646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt.4ba50940 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 15:19:56.718433  913646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt.4ba50940 ...
	I1026 15:19:56.718507  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt.4ba50940: {Name:mk80f1cd55cb4be354cb09edbfce1f574dd3eb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:56.718701  913646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940 ...
	I1026 15:19:56.718741  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940: {Name:mke73cc40bbb64932f4825d4616b06490355a253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:56.718845  913646 certs.go:382] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt.4ba50940 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt
	I1026 15:19:56.718967  913646 certs.go:386] copying /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940 -> /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key
	I1026 15:19:56.719076  913646 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key
	I1026 15:19:56.719126  913646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt with IP's: []
	I1026 15:19:57.207241  913646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt ...
	I1026 15:19:57.207316  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt: {Name:mkde585abdd8c2037fc5bece66eabcb3098c1edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:57.207519  913646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key ...
	I1026 15:19:57.207566  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key: {Name:mkf0a04a26b493a080ccc2442fdd2ab140744933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
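
Three profile certs are minted here against the shared minikubeCA: the "minikube-user" client cert, the apiserver serving cert with the SANs listed at 15:19:56.574 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2), and the "aggregator" front-proxy client cert. Confirming the SANs afterwards is a one-liner with stock openssl, using the profile path from this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
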
	I1026 15:19:57.207791  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:19:57.207874  913646 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:19:57.207900  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:19:57.207951  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:19:57.207995  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:19:57.208047  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:19:57.208112  913646 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:19:57.208776  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:19:57.229690  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:19:57.268869  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:19:57.291171  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:19:57.347508  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:19:57.367895  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:19:57.387258  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:19:57.419096  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:19:57.441572  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:19:57.461860  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:19:57.483665  913646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:19:57.503962  913646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:19:57.518199  913646 ssh_runner.go:195] Run: openssl version
	I1026 15:19:57.524917  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:19:57.534077  913646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:19:57.539212  913646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:19:57.539276  913646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:19:57.587823  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:19:57.603667  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:19:57.614103  913646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:19:57.618995  913646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:19:57.619065  913646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:19:57.665163  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:19:57.674232  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:19:57.682961  913646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:19:57.686949  913646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:19:57.687015  913646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:19:57.734979  913646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
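
The openssl -hash / ln -fs pairs above follow OpenSSL's c_rehash convention: each trusted CA in /etc/ssl/certs is reachable via a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). Adding one cert by hand, as a sketch:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # 8-hex-digit subject hash
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
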
	I1026 15:19:57.750692  913646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:19:57.760118  913646 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:19:57.760168  913646 kubeadm.go:400] StartCluster: {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:19:57.760245  913646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:19:57.760307  913646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:19:57.813611  913646 cri.go:89] found id: ""
	I1026 15:19:57.813689  913646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:19:57.825471  913646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:19:57.835893  913646 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:19:57.835954  913646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:19:57.846099  913646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:19:57.846170  913646 kubeadm.go:157] found existing configuration files:
	
	I1026 15:19:57.846253  913646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:19:57.855526  913646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:19:57.855594  913646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:19:57.864357  913646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:19:57.882700  913646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:19:57.882773  913646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:19:57.892045  913646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:19:57.903589  913646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:19:57.903714  913646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:19:57.912590  913646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:19:57.923624  913646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:19:57.923739  913646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
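
The four grep/rm pairs above are a single idempotent cleanup: any kubeconfig that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it from scratch. Collapsed into a loop:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
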
	I1026 15:19:57.934293  913646 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:19:57.977139  913646 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:19:57.977486  913646 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:19:58.042650  913646 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:19:58.042776  913646 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1026 15:19:58.042859  913646 kubeadm.go:318] OS: Linux
	I1026 15:19:58.042938  913646 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:19:58.043016  913646 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1026 15:19:58.043095  913646 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:19:58.043182  913646 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:19:58.043264  913646 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:19:58.043347  913646 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:19:58.043417  913646 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:19:58.043503  913646 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:19:58.043570  913646 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1026 15:19:58.153173  913646 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:19:58.153346  913646 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:19:58.153472  913646 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:19:58.170787  913646 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:19:56.798848  914177 provision.go:177] copyRemoteCerts
	I1026 15:19:56.798946  914177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:19:56.799007  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:56.822144  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:19:56.929545  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:19:56.950534  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 15:19:56.971174  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:19:56.991889  914177 provision.go:87] duration metric: took 1.16249707s to configureAuth
	I1026 15:19:56.991965  914177 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:19:56.992178  914177 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:19:56.992333  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:57.012191  914177 main.go:141] libmachine: Using SSH client type: native
	I1026 15:19:57.012502  914177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:19:57.012516  914177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:19:57.389203  914177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:19:57.389225  914177 machine.go:96] duration metric: took 5.143062613s to provisionDockerMachine
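
The SSH command at 15:19:57.012 drops an environment file and restarts the runtime so the extra --insecure-registry flag takes effect; this relies on the crio unit in minikube's base image sourcing /etc/sysconfig/crio.minikube via EnvironmentFile, which is an assumption here (the unit file itself is not shown in this log). The same steps as a sketch:

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
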
	I1026 15:19:57.389236  914177 start.go:293] postStartSetup for "default-k8s-diff-port-494684" (driver="docker")
	I1026 15:19:57.389248  914177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:19:57.389328  914177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:19:57.389370  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:57.419146  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:19:57.530812  914177 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:19:57.536378  914177 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:19:57.536410  914177 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:19:57.536422  914177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:19:57.536478  914177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:19:57.536561  914177 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:19:57.536683  914177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:19:57.546704  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:19:57.567530  914177 start.go:296] duration metric: took 178.277273ms for postStartSetup
	I1026 15:19:57.567627  914177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:19:57.567683  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:57.586998  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:19:57.698669  914177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:19:57.706047  914177 fix.go:56] duration metric: took 6.137412138s for fixHost
	I1026 15:19:57.706084  914177 start.go:83] releasing machines lock for "default-k8s-diff-port-494684", held for 6.137461837s
	I1026 15:19:57.706154  914177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-494684
	I1026 15:19:57.726464  914177 ssh_runner.go:195] Run: cat /version.json
	I1026 15:19:57.726519  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:57.726774  914177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:19:57.726842  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:19:57.756350  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:19:57.764920  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:19:57.872593  914177 ssh_runner.go:195] Run: systemctl --version
	I1026 15:19:57.975377  914177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:19:58.035678  914177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:19:58.042363  914177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:19:58.043617  914177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:19:58.052971  914177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:19:58.053040  914177 start.go:495] detecting cgroup driver to use...
	I1026 15:19:58.053088  914177 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:19:58.053170  914177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:19:58.078850  914177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:19:58.094472  914177 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:19:58.094581  914177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:19:58.112160  914177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:19:58.133560  914177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:19:58.282796  914177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:19:58.437158  914177 docker.go:234] disabling docker service ...
	I1026 15:19:58.437315  914177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:19:58.453267  914177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:19:58.467089  914177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:19:58.651673  914177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:19:58.828306  914177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:19:58.849912  914177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:19:58.876064  914177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:19:58.876151  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.886764  914177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:19:58.886835  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.899835  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.909791  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.918503  914177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:19:58.926608  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.936386  914177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.945374  914177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:19:58.955020  914177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:19:58.963309  914177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:19:58.972034  914177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:19:59.116807  914177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:19:59.265913  914177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:19:59.265998  914177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:19:59.270375  914177 start.go:563] Will wait 60s for crictl version
	I1026 15:19:59.270449  914177 ssh_runner.go:195] Run: which crictl
	I1026 15:19:59.274346  914177 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:19:59.315052  914177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:19:59.315181  914177 ssh_runner.go:195] Run: crio --version
	I1026 15:19:59.349310  914177 ssh_runner.go:195] Run: crio --version
	I1026 15:19:59.386841  914177 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:19:58.176800  913646 out.go:252]   - Generating certificates and keys ...
	I1026 15:19:58.176918  913646 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:19:58.176994  913646 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:19:58.419210  913646 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:19:59.060939  913646 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:19:59.707262  913646 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:19:59.389824  914177 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-494684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:19:59.406320  914177 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:19:59.410540  914177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:19:59.420924  914177 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:19:59.421050  914177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:19:59.421108  914177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:19:59.459244  914177 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:19:59.459263  914177 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:19:59.459319  914177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:19:59.486208  914177 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:19:59.486271  914177 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:19:59.486292  914177 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1026 15:19:59.486418  914177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-494684 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
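
	The doubled ExecStart in the rendered kubelet unit above is the standard systemd override idiom: an empty ExecStart= first clears the command inherited from the packaged unit, then the second line installs minikube's flags. The text is shipped to the node as the 10-kubeadm.conf drop-in a few lines below, followed by a daemon-reload. A hedged sketch of that install step (paths from the log; the flag list here is abbreviated):

	package main

	import (
		"os"
		"os/exec"
	)

	// Abbreviated drop-in; the real one carries the full flag set
	// shown in the rendered unit above.
	const dropIn = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	`

	func main() {
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
		// systemd only re-reads unit files after a reload.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
	}
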
	I1026 15:19:59.486537  914177 ssh_runner.go:195] Run: crio config
	I1026 15:19:59.567310  914177 cni.go:84] Creating CNI manager for ""
	I1026 15:19:59.567347  914177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:19:59.567362  914177 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:19:59.567385  914177 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-494684 NodeName:default-k8s-diff-port-494684 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:19:59.567544  914177 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-494684"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:19:59.567623  914177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:19:59.575851  914177 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:19:59.575933  914177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:19:59.583787  914177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1026 15:19:59.600146  914177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:19:59.620007  914177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
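
	The kubeadm.yaml.new just staged is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, rendered in full above); on restart paths it is later diffed against the live copy to decide whether the control plane needs reconfiguration. A small sketch, assuming gopkg.in/yaml.v3, that walks the stream and lists each document's kind:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// yaml.v3's Decoder yields one document per Decode call.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%-35s %s\n", doc.APIVersion, doc.Kind)
		}
	}
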
	I1026 15:19:59.634413  914177 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:19:59.638372  914177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:19:59.648640  914177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:19:59.810788  914177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:19:59.830789  914177 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684 for IP: 192.168.76.2
	I1026 15:19:59.830816  914177 certs.go:195] generating shared ca certs ...
	I1026 15:19:59.830832  914177 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:19:59.830979  914177 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:19:59.831071  914177 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:19:59.831111  914177 certs.go:257] generating profile certs ...
	I1026 15:19:59.831248  914177 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.key
	I1026 15:19:59.831375  914177 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/apiserver.key.e325c763
	I1026 15:19:59.831467  914177 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/proxy-client.key
	I1026 15:19:59.831634  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:19:59.831692  914177 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:19:59.831716  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:19:59.831773  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:19:59.831838  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:19:59.831883  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:19:59.831961  914177 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:19:59.832653  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:19:59.862838  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:19:59.912310  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:19:59.947286  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:19:59.967236  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 15:19:59.985336  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:20:00.025219  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:20:00.059049  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:20:00.138076  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:20:00.201982  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:20:00.301717  914177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:20:00.399002  914177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:20:00.430924  914177 ssh_runner.go:195] Run: openssl version
	I1026 15:20:00.442469  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:20:00.455706  914177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:20:00.461009  914177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:20:00.461098  914177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:20:00.539363  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:20:00.553750  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:20:00.569097  914177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:20:00.576357  914177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:20:00.576516  914177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:20:00.624663  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:20:00.641414  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:20:00.652199  914177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:00.660963  914177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:00.661153  914177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:00.742947  914177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
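
	The openssl x509 -hash / ln -fs pairs above reproduce OpenSSL's c_rehash convention: the trust store resolves a CA by a symlink named <subject-hash>.0, so each PEM copied under /usr/share/ca-certificates gets a hash-named link in /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A sketch of creating one such link, shelling out to openssl the same way the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash asks openssl for the certificate's subject-name
	// hash and points <hash>.0 in certsDir at the PEM file.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // equivalent of ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
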
	I1026 15:20:00.759999  914177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:20:00.769868  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:20:00.865582  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:20:00.948227  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:20:01.076342  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:20:01.259897  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:20:01.437378  914177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:20:01.569553  914177 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-494684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-494684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:01.569702  914177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:20:01.569817  914177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:20:01.678337  914177 cri.go:89] found id: "241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6"
	I1026 15:20:01.678406  914177 cri.go:89] found id: "7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b"
	I1026 15:20:01.678426  914177 cri.go:89] found id: "726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd"
	I1026 15:20:01.678456  914177 cri.go:89] found id: "76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc"
	I1026 15:20:01.678484  914177 cri.go:89] found id: ""
	I1026 15:20:01.678579  914177 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:20:01.710847  914177 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:01Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:20:01.711033  914177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:20:01.739338  914177 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:20:01.739403  914177 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:20:01.739486  914177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:20:01.757227  914177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:20:01.757767  914177 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-494684" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:01.757930  914177 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-494684" cluster setting kubeconfig missing "default-k8s-diff-port-494684" context setting]
	I1026 15:20:01.758248  914177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:01.759614  914177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:20:01.805228  914177 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:20:01.805307  914177 kubeadm.go:601] duration metric: took 65.88449ms to restartPrimaryControlPlane
	I1026 15:20:01.805334  914177 kubeadm.go:402] duration metric: took 235.793383ms to StartCluster
	I1026 15:20:01.805377  914177 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:01.805461  914177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:01.806104  914177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:01.806385  914177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:01.806712  914177 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:01.806779  914177 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:20:01.806959  914177 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-494684"
	I1026 15:20:01.807011  914177 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-494684"
	W1026 15:20:01.807032  914177 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:20:01.806962  914177 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-494684"
	I1026 15:20:01.807095  914177 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-494684"
	W1026 15:20:01.807105  914177 addons.go:247] addon dashboard should already be in state true
	I1026 15:20:01.807107  914177 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:20:01.806974  914177 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-494684"
	I1026 15:20:01.807165  914177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-494684"
	I1026 15:20:01.807504  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:20:01.807855  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:20:01.807133  914177 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:20:01.808951  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:20:01.811453  914177 out.go:179] * Verifying Kubernetes components...
	I1026 15:20:01.816845  914177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:01.850083  914177 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:20:01.853083  914177 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:01.853109  914177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:20:01.853183  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:20:01.872800  914177 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:20:01.874438  914177 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-494684"
	W1026 15:20:01.874462  914177 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:20:01.874495  914177 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:20:01.874958  914177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:20:01.878889  914177 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:20:00.105260  913646 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:20:01.208020  913646 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:20:01.208505  913646 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-810872] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:20:03.333051  913646 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:20:03.333198  913646 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-810872] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:20:03.729942  913646 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:20:04.409060  913646 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:20:01.881808  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:20:01.881847  914177 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:20:01.881924  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:20:01.910573  914177 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:01.910594  914177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:20:01.910661  914177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:20:01.932835  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:20:01.955338  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:20:01.962447  914177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:20:02.307321  914177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:02.329656  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:20:02.329733  914177 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:20:02.390354  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:20:02.390428  914177 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:20:02.398802  914177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:02.493606  914177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:02.506315  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:20:02.506397  914177 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:20:02.584373  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:20:02.584448  914177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:20:02.713505  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:20:02.713619  914177 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:20:02.817375  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:20:02.817456  914177 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:20:02.903426  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:20:02.903508  914177 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:20:02.952136  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:20:02.952212  914177 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:20:03.007947  914177 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:03.007977  914177 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:20:03.047659  914177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:05.025069  913646 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:20:05.025144  913646 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:20:05.431420  913646 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:20:06.053491  913646 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:20:06.509056  913646 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:20:07.041092  913646 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:20:07.377068  913646 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:20:07.377169  913646 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:20:07.384650  913646 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:20:07.388070  913646 out.go:252]   - Booting up control plane ...
	I1026 15:20:07.388181  913646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:20:07.388263  913646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:20:07.388334  913646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:20:07.418713  913646 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:20:07.418826  913646 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:20:07.433159  913646 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:20:07.433264  913646 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:20:07.433306  913646 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:20:07.645940  913646 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:20:07.646064  913646 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:20:09.647584  913646 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001734226s
	I1026 15:20:09.651415  913646 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:20:09.651516  913646 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 15:20:09.651609  913646 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:20:09.651692  913646 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:20:10.713198  914177 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.314256889s)
	I1026 15:20:10.713244  914177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:20:10.713547  914177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.219864442s)
	I1026 15:20:10.714748  914177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.407346313s)
	I1026 15:20:10.821072  914177 node_ready.go:49] node "default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:10.821101  914177 node_ready.go:38] duration metric: took 107.840905ms for node "default-k8s-diff-port-494684" to be "Ready" ...
	I1026 15:20:10.821116  914177 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:20:10.821182  914177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:20:10.966431  914177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.918730673s)
	I1026 15:20:10.966658  914177 api_server.go:72] duration metric: took 9.160219426s to wait for apiserver process to appear ...
	I1026 15:20:10.966692  914177 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:20:10.966736  914177 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1026 15:20:10.969530  914177 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-494684 addons enable metrics-server
	
	I1026 15:20:10.972433  914177 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1026 15:20:10.975550  914177 addons.go:514] duration metric: took 9.168760164s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 15:20:10.997498  914177 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:10.997533  914177 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
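
	Only one check in the dump above is failing: poststarthook/rbac/bootstrap-roles, which clears once the bootstrap RBAC objects are written; the very next probe, about half a second later, returns 200. A minimal polling sketch against the same endpoint (real minikube trusts the cluster CA rather than skipping verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.76.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
		}
	}
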
	I1026 15:20:11.467351  914177 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1026 15:20:11.498933  914177 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1026 15:20:11.507114  914177 api_server.go:141] control plane version: v1.34.1
	I1026 15:20:11.507140  914177 api_server.go:131] duration metric: took 540.416642ms to wait for apiserver health ...
	I1026 15:20:11.507149  914177 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:20:11.522090  914177 system_pods.go:59] 8 kube-system pods found
	I1026 15:20:11.522174  914177 system_pods.go:61] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:20:11.522200  914177 system_pods.go:61] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:20:11.522243  914177 system_pods.go:61] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:20:11.522267  914177 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:20:11.522300  914177 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:20:11.522318  914177 system_pods.go:61] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:20:11.522349  914177 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:20:11.522374  914177 system_pods.go:61] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:20:11.522396  914177 system_pods.go:74] duration metric: took 15.240619ms to wait for pod list to return data ...
	I1026 15:20:11.522418  914177 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:20:11.538020  914177 default_sa.go:45] found service account: "default"
	I1026 15:20:11.538103  914177 default_sa.go:55] duration metric: took 15.663893ms for default service account to be created ...
	I1026 15:20:11.538131  914177 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:20:11.548137  914177 system_pods.go:86] 8 kube-system pods found
	I1026 15:20:11.548218  914177 system_pods.go:89] "coredns-66bc5c9577-zm8vb" [94c0c5a6-92d9-4c12-ac44-1514a81158fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:20:11.548242  914177 system_pods.go:89] "etcd-default-k8s-diff-port-494684" [db182ec9-b2b0-4204-89d4-14af164e3091] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:20:11.548263  914177 system_pods.go:89] "kindnet-bfc62" [044af459-c8ff-41f0-976f-0d52643cf9fb] Running
	I1026 15:20:11.548300  914177 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-494684" [6e6a2125-4fc7-4740-b64a-66cfbbbabbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:20:11.548327  914177 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-494684" [3dca2a80-df22-4074-b68e-87443f6692d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:20:11.548347  914177 system_pods.go:89] "kube-proxy-nbcd6" [da5e9adf-608b-4892-a105-a03c1dea6660] Running
	I1026 15:20:11.548369  914177 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-494684" [1bf609f2-d612-480a-98b8-044a1b75e97b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:20:11.548441  914177 system_pods.go:89] "storage-provisioner" [76a854e4-16a9-4614-a574-43c882aa10b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:20:11.548466  914177 system_pods.go:126] duration metric: took 10.318062ms to wait for k8s-apps to be running ...
	I1026 15:20:11.548489  914177 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:20:11.548578  914177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:20:11.574228  914177 system_svc.go:56] duration metric: took 25.729251ms WaitForService to wait for kubelet
	I1026 15:20:11.574258  914177 kubeadm.go:586] duration metric: took 9.767820107s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:20:11.574276  914177 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:20:11.591467  914177 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:20:11.591504  914177 node_conditions.go:123] node cpu capacity is 2
	I1026 15:20:11.591516  914177 node_conditions.go:105] duration metric: took 17.233732ms to run NodePressure ...
	I1026 15:20:11.591529  914177 start.go:241] waiting for startup goroutines ...
	I1026 15:20:11.591537  914177 start.go:246] waiting for cluster config update ...
	I1026 15:20:11.591549  914177 start.go:255] writing updated cluster config ...
	I1026 15:20:11.591854  914177 ssh_runner.go:195] Run: rm -f paused
	I1026 15:20:11.596589  914177 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:20:11.608689  914177 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:20:13.634428  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:16.122967  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:18.667400  913646 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.014821487s
	W1026 15:20:18.621827  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:21.121428  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
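
	Each pod_ready probe above boils down to reading the pod's Ready condition and retrying while it is false. A simplified client-go sketch of that predicate, using the kubeconfig path from this run (minikube's kverify package also handles deleted pods and label selection):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-713593/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-zm8vb", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
			}
		}
	}
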
	I1026 15:20:20.487880  913646 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 10.836467492s
	I1026 15:20:21.653893  913646 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.002403202s
	I1026 15:20:21.674007  913646 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:20:21.705784  913646 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:20:21.725349  913646 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:20:21.725858  913646 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-810872 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:20:21.746228  913646 kubeadm.go:318] [bootstrap-token] Using token: bcwfat.o629puaolp171pwv
	I1026 15:20:21.750491  913646 out.go:252]   - Configuring RBAC rules ...
	I1026 15:20:21.750633  913646 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:20:21.765278  913646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:20:21.780763  913646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:20:21.785764  913646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:20:21.794613  913646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:20:21.807643  913646 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:20:22.066108  913646 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:20:22.592496  913646 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:20:23.062434  913646 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:20:23.064257  913646 kubeadm.go:318] 
	I1026 15:20:23.064342  913646 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:20:23.064353  913646 kubeadm.go:318] 
	I1026 15:20:23.064434  913646 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:20:23.064446  913646 kubeadm.go:318] 
	I1026 15:20:23.064473  913646 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:20:23.065077  913646 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:20:23.065143  913646 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:20:23.065153  913646 kubeadm.go:318] 
	I1026 15:20:23.065210  913646 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:20:23.065220  913646 kubeadm.go:318] 
	I1026 15:20:23.065270  913646 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:20:23.065279  913646 kubeadm.go:318] 
	I1026 15:20:23.065334  913646 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:20:23.065416  913646 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:20:23.065495  913646 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:20:23.065505  913646 kubeadm.go:318] 
	I1026 15:20:23.065880  913646 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:20:23.065984  913646 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:20:23.065993  913646 kubeadm.go:318] 
	I1026 15:20:23.066333  913646 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token bcwfat.o629puaolp171pwv \
	I1026 15:20:23.066455  913646 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 \
	I1026 15:20:23.066695  913646 kubeadm.go:318] 	--control-plane 
	I1026 15:20:23.066711  913646 kubeadm.go:318] 
	I1026 15:20:23.066973  913646 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:20:23.066987  913646 kubeadm.go:318] 
	I1026 15:20:23.067340  913646 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token bcwfat.o629puaolp171pwv \
	I1026 15:20:23.067669  913646 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:54f11feaa4c6f3a3028136d6bab6e4ce2ea6c4e27502c2885062873bf46bd6e7 
	I1026 15:20:23.078883  913646 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1026 15:20:23.079124  913646 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1026 15:20:23.079237  913646 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
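
	The --discovery-token-ca-cert-hash in the join commands above is the standard kubeadm pin: a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the CA file (path assumed from this run's cert layout):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the raw SPKI, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum) // compare with the join command above
	}
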
	I1026 15:20:23.079258  913646 cni.go:84] Creating CNI manager for ""
	I1026 15:20:23.079266  913646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:23.082399  913646 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:20:23.086357  913646 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:20:23.094098  913646 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:20:23.094124  913646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:20:23.146284  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:20:23.587279  913646 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:20:23.587432  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:23.587523  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-810872 minikube.k8s.io/updated_at=2025_10_26T15_20_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=newest-cni-810872 minikube.k8s.io/primary=true
	I1026 15:20:23.892780  913646 ops.go:34] apiserver oom_adj: -16
	I1026 15:20:23.892900  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:24.393381  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1026 15:20:23.619817  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:26.116482  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:24.893006  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:25.393877  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:25.893490  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:26.393507  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:26.893966  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:27.393020  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:27.893776  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:28.393948  913646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:20:28.552637  913646 kubeadm.go:1113] duration metric: took 4.965253011s to wait for elevateKubeSystemPrivileges
	I1026 15:20:28.552670  913646 kubeadm.go:402] duration metric: took 30.792504513s to StartCluster
	I1026 15:20:28.552687  913646 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:28.552765  913646 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:28.553743  913646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:28.553991  913646 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:28.554105  913646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:20:28.554381  913646 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:28.554355  913646 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:20:28.554482  913646 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-810872"
	I1026 15:20:28.554496  913646 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-810872"
	I1026 15:20:28.554521  913646 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:28.554561  913646 addons.go:69] Setting default-storageclass=true in profile "newest-cni-810872"
	I1026 15:20:28.554577  913646 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-810872"
	I1026 15:20:28.554932  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:28.555007  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:28.557270  913646 out.go:179] * Verifying Kubernetes components...
	I1026 15:20:28.560529  913646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:28.587069  913646 addons.go:238] Setting addon default-storageclass=true in "newest-cni-810872"
	I1026 15:20:28.587116  913646 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:28.587528  913646 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:28.605133  913646 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:20:28.608279  913646 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:28.608308  913646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:20:28.608384  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:28.637812  913646 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:28.637834  913646 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:20:28.637910  913646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:28.645021  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:28.668886  913646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:28.878771  913646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:20:28.897781  913646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:28.989974  913646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:29.024004  913646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:29.461715  913646 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 15:20:29.463718  913646 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:20:29.463772  913646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:20:29.720406  913646 api_server.go:72] duration metric: took 1.166378058s to wait for apiserver process to appear ...
	I1026 15:20:29.720483  913646 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:20:29.720528  913646 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:29.723772  913646 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:20:29.727484  913646 addons.go:514] duration metric: took 1.17311647s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 15:20:29.733067  913646 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:20:29.734227  913646 api_server.go:141] control plane version: v1.34.1
	I1026 15:20:29.734293  913646 api_server.go:131] duration metric: took 13.77771ms to wait for apiserver health ...
	I1026 15:20:29.734320  913646 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:20:29.738743  913646 system_pods.go:59] 8 kube-system pods found
	I1026 15:20:29.738841  913646 system_pods.go:61] "coredns-66bc5c9577-b49d6" [0cc1ad2e-be8a-43fb-baed-3d411550f34c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:29.738866  913646 system_pods.go:61] "etcd-newest-cni-810872" [784475d8-6ee3-45c9-a0cc-55d18ee84177] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:20:29.738901  913646 system_pods.go:61] "kindnet-ggnvk" [52fc9b6a-4117-47b6-8fd4-eff144861784] Running
	I1026 15:20:29.738927  913646 system_pods.go:61] "kube-apiserver-newest-cni-810872" [cdd8bae8-4574-497b-a540-57831768a16b] Running
	I1026 15:20:29.738947  913646 system_pods.go:61] "kube-controller-manager-newest-cni-810872" [96ea627b-92e4-448c-8621-2129603a8ce3] Running
	I1026 15:20:29.738981  913646 system_pods.go:61] "kube-proxy-7rsbv" [d20c61cd-9231-44c6-9861-45cb1d45c060] Running
	I1026 15:20:29.739002  913646 system_pods.go:61] "kube-scheduler-newest-cni-810872" [17a3ef6c-201f-4fdb-b45f-6e3b2614a3fd] Running
	I1026 15:20:29.739021  913646 system_pods.go:61] "storage-provisioner" [6a816eb1-59c8-4ed0-9087-4fb271f4608b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:29.739041  913646 system_pods.go:74] duration metric: took 4.703551ms to wait for pod list to return data ...
	I1026 15:20:29.739081  913646 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:20:29.743940  913646 default_sa.go:45] found service account: "default"
	I1026 15:20:29.744018  913646 default_sa.go:55] duration metric: took 4.916944ms for default service account to be created ...
	I1026 15:20:29.744045  913646 kubeadm.go:586] duration metric: took 1.190020024s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:20:29.744090  913646 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:20:29.747217  913646 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:20:29.747299  913646 node_conditions.go:123] node cpu capacity is 2
	I1026 15:20:29.747327  913646 node_conditions.go:105] duration metric: took 3.213844ms to run NodePressure ...
	I1026 15:20:29.747352  913646 start.go:241] waiting for startup goroutines ...
	I1026 15:20:29.966232  913646 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-810872" context rescaled to 1 replicas
	I1026 15:20:29.966332  913646 start.go:246] waiting for cluster config update ...
	I1026 15:20:29.966360  913646 start.go:255] writing updated cluster config ...
	I1026 15:20:29.966733  913646 ssh_runner.go:195] Run: rm -f paused
	I1026 15:20:30.042217  913646 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:20:30.047291  913646 out.go:179] * Done! kubectl is now configured to use "newest-cni-810872" cluster and "default" namespace by default
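The `Done!` line closes out the healthz wait visible a few lines earlier: api_server.go polls https://192.168.85.2:8443/healthz until it answers 200 "ok". A rough standalone equivalent follows; TLS verification is skipped purely so the sketch is self-contained, whereas minikube itself authenticates with the cluster's client certificates:

// Illustrative healthz probe; the endpoint is taken from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification instead of
			// loading the cluster CA and client certs as minikube does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}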
	
	
	==> CRI-O <==
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.332995978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.338318269Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-7rsbv/POD" id=b3092fc4-07fa-42cb-9f3c-a95b4bfa572c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.338387611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.341939133Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b3092fc4-07fa-42cb-9f3c-a95b4bfa572c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.343020737Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=53912598-c67f-436e-aaa5-a3d1a9c655db name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.355943299Z" level=info msg="Ran pod sandbox b11011ca63087197a5e3a7dbd4350a10c0eddb872b570570d8ad65d479332bd3 with infra container: kube-system/kube-proxy-7rsbv/POD" id=b3092fc4-07fa-42cb-9f3c-a95b4bfa572c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.357566898Z" level=info msg="Ran pod sandbox 10fc9ca0fea66ea3803fa66fdd1a1a211681ce6a3e988c37c4e39ca663460e5e with infra container: kube-system/kindnet-ggnvk/POD" id=53912598-c67f-436e-aaa5-a3d1a9c655db name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.358702066Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7df6f167-3aa3-475f-a090-a0c2d889f676 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.361206676Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=aeacf5a3-ab14-4313-8785-0a89c4c0977a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.361647419Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=edfea28f-9667-4038-b6ef-16f152efd694 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.36310108Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=679768c1-ac9e-4e76-8796-e7f0e3c73dfd name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.368401429Z" level=info msg="Creating container: kube-system/kube-proxy-7rsbv/kube-proxy" id=e211a6c3-5c2f-4cc9-909a-b1babe7adf60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.368680153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.371866665Z" level=info msg="Creating container: kube-system/kindnet-ggnvk/kindnet-cni" id=01496ea2-ae86-4d9d-8005-14da1ae13a95 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.371965678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.379216301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.379928506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.381441532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.382459093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.415836502Z" level=info msg="Created container 974380d2ceef74d47b5b44b2b917543fbbe0648895af177778e436c60f7fd7f7: kube-system/kindnet-ggnvk/kindnet-cni" id=01496ea2-ae86-4d9d-8005-14da1ae13a95 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.41743885Z" level=info msg="Starting container: 974380d2ceef74d47b5b44b2b917543fbbe0648895af177778e436c60f7fd7f7" id=4dbff494-7f57-4d78-a769-c6bb50e1e8b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.423446587Z" level=info msg="Started container" PID=1407 containerID=974380d2ceef74d47b5b44b2b917543fbbe0648895af177778e436c60f7fd7f7 description=kube-system/kindnet-ggnvk/kindnet-cni id=4dbff494-7f57-4d78-a769-c6bb50e1e8b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10fc9ca0fea66ea3803fa66fdd1a1a211681ce6a3e988c37c4e39ca663460e5e
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.435037295Z" level=info msg="Created container 9701507eb90525b222d9c5a5a2440953b8e8753d167ccd3808eb1581c94234fd: kube-system/kube-proxy-7rsbv/kube-proxy" id=e211a6c3-5c2f-4cc9-909a-b1babe7adf60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.439715492Z" level=info msg="Starting container: 9701507eb90525b222d9c5a5a2440953b8e8753d167ccd3808eb1581c94234fd" id=ef1c6e8b-3df0-4f62-a6b0-0c27ce13eaeb name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:28 newest-cni-810872 crio[837]: time="2025-10-26T15:20:28.446098438Z" level=info msg="Started container" PID=1409 containerID=9701507eb90525b222d9c5a5a2440953b8e8753d167ccd3808eb1581c94234fd description=kube-system/kube-proxy-7rsbv/kube-proxy id=ef1c6e8b-3df0-4f62-a6b0-0c27ce13eaeb name=/runtime.v1.RuntimeService/StartContainer sandboxID=b11011ca63087197a5e3a7dbd4350a10c0eddb872b570570d8ad65d479332bd3
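Each RunPodSandbox / CreateContainer / StartContainer entry above is CRI-O answering a CRI gRPC call from the kubelet. For orientation, a minimal client that lists the resulting containers over the same API; the socket path is CRI-O's conventional default and an assumption here:

// Sketch of a CRI RuntimeService client (not part of the test suite).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}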
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9701507eb9052       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   b11011ca63087       kube-proxy-7rsbv                            kube-system
	974380d2ceef7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   10fc9ca0fea66       kindnet-ggnvk                               kube-system
	2fc108129f2e3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago      Running             etcd                      0                   6b4bbde418c74       etcd-newest-cni-810872                      kube-system
	1fa65dd303223       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago      Running             kube-controller-manager   0                   2e8bb6202b2ec       kube-controller-manager-newest-cni-810872   kube-system
	00d24fdac2f69       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago      Running             kube-scheduler            0                   efebfa80eb64c       kube-scheduler-newest-cni-810872            kube-system
	1574ea7824211       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago      Running             kube-apiserver            0                   e92f4055c2cfc       kube-apiserver-newest-cni-810872            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-810872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-810872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-810872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_20_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-810872
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:20:23 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:20:23 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:20:23 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:20:23 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-810872
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cb876d54-b19f-49ca-b5c7-700f084fb6f3
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-810872                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-ggnvk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-810872             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-810872    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-7rsbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-810872             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Warning  CgroupV1                 22s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x8 over 22s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-810872 event: Registered Node newest-cni-810872 in Controller
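The Pending coredns and storage-provisioner pods reported earlier follow directly from the Taints line above: until the CNI comes up, the node carries node.kubernetes.io/not-ready:NoSchedule, which only the host-network DaemonSet pods (kube-proxy, kindnet) tolerate. A small client-go sketch that surfaces the taint; the kubeconfig location is assumed, the node name comes from this report:

// Read the node's taints to see why non-tolerating pods stay Pending.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-810872", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, t := range node.Spec.Taints {
		// Expect node.kubernetes.io/not-ready=:NoSchedule while the CNI is down.
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}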
	
	
	==> dmesg <==
	[ +18.091685] overlayfs: idmapped layers are currently not supported
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	[Oct26 15:20] overlayfs: idmapped layers are currently not supported
	[  +9.178014] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2fc108129f2e3caddaf2e1089e3bc34d522d58f4ece109fb3b13b139d69ec59f] <==
	{"level":"warn","ts":"2025-10-26T15:20:15.381628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.427095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.565057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.565822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.597553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.624179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.652033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.689908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.730403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.778035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.814997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.841571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.879845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.936378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:15.981530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.021501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.141905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.185191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.251028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.297101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.395848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.431732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.497286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.548305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:16.792561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39146","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:20:31 up  5:03,  0 user,  load average: 5.18, 4.01, 3.27
	Linux newest-cni-810872 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [974380d2ceef74d47b5b44b2b917543fbbe0648895af177778e436c60f7fd7f7] <==
	I1026 15:20:28.537460       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:20:28.635294       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:20:28.635457       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:20:28.635470       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:20:28.635485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:20:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:20:28.828049       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:20:28.828068       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:20:28.828076       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:20:28.828196       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1574ea78242113ce80bfa05824a220dd0199bd8a30157fde72dcdd163e892123] <==
	I1026 15:20:19.118576       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:20:19.118637       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:20:19.118648       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:20:19.118655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:20:19.118660       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:20:19.121588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:20:19.126630       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:20:19.131128       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:20:19.490521       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:20:19.526328       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:20:19.526350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:20:21.001732       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:20:21.070491       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:20:21.171029       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:20:21.184004       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 15:20:21.185493       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:20:21.196956       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:20:22.061228       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:20:22.554950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:20:22.590013       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:20:22.609115       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:20:27.811401       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:20:27.818551       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:20:27.912393       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:27.958811       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [1fa65dd30322393b9d3b52f3db6ba122d919f6285c22057f0dde24f5d8dd9f76] <==
	I1026 15:20:27.170309       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:20:27.179478       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:20:27.202040       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:27.205039       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:20:27.205158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:20:27.205445       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:20:27.205868       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-810872"
	I1026 15:20:27.206000       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:20:27.205934       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:20:27.205183       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:20:27.205175       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:20:27.208775       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:27.212210       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:20:27.212289       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:20:27.212188       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:20:27.212451       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:20:27.212506       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:20:27.212536       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:20:27.212564       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:20:27.216324       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:20:27.224911       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:27.231188       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-810872" podCIDRs=["10.42.0.0/24"]
	I1026 15:20:27.231257       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:20:27.235214       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:20:27.259032       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9701507eb90525b222d9c5a5a2440953b8e8753d167ccd3808eb1581c94234fd] <==
	I1026 15:20:28.496241       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:20:28.671253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:20:28.772014       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:20:28.772071       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:20:28.772161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:20:28.942151       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:20:28.942209       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:20:28.955163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:20:28.955461       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:20:28.955481       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:28.957187       1 config.go:200] "Starting service config controller"
	I1026 15:20:28.957199       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:20:28.957216       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:20:28.957221       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:20:28.957240       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:20:28.957245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:20:28.957973       1 config.go:309] "Starting node config controller"
	I1026 15:20:28.957981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:20:28.957987       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:20:29.060796       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:20:29.060834       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:20:29.060868       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [00d24fdac2f6978f4097e0a24b04056d511e930bb6c5e87bd5e05de82a34bd38] <==
	I1026 15:20:20.436251       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:20.442232       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:20:20.442343       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:20.442367       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:20.442384       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 15:20:20.465030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:20:20.466536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:20:20.466718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:20:20.466827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:20:20.467020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:20:20.467145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:20:20.479801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1026 15:20:20.479983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:20:20.480036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:20:20.480089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:20:20.480241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:20:20.480289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:20:20.480334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:20:20.480380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:20:20.480484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:20:20.480537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:20:20.480582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:20:20.480631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:20:20.480691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 15:20:22.042867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: I1026 15:20:23.097331    1293 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-810872"
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: I1026 15:20:23.097686    1293 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-810872"
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: I1026 15:20:23.674309    1293 apiserver.go:52] "Watching apiserver"
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: I1026 15:20:23.727838    1293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: I1026 15:20:23.952402    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:23 newest-cni-810872 kubelet[1293]: E1026 15:20:23.993416    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-810872\" already exists" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:24 newest-cni-810872 kubelet[1293]: I1026 15:20:24.060559    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-810872" podStartSLOduration=1.0605295240000001 podStartE2EDuration="1.060529524s" podCreationTimestamp="2025-10-26 15:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:24.019086517 +0000 UTC m=+1.533281387" watchObservedRunningTime="2025-10-26 15:20:24.060529524 +0000 UTC m=+1.574724402"
	Oct 26 15:20:24 newest-cni-810872 kubelet[1293]: I1026 15:20:24.084637    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-810872" podStartSLOduration=3.084600327 podStartE2EDuration="3.084600327s" podCreationTimestamp="2025-10-26 15:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:24.069210349 +0000 UTC m=+1.583405219" watchObservedRunningTime="2025-10-26 15:20:24.084600327 +0000 UTC m=+1.598795197"
	Oct 26 15:20:24 newest-cni-810872 kubelet[1293]: I1026 15:20:24.102205    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-810872" podStartSLOduration=1.102184537 podStartE2EDuration="1.102184537s" podCreationTimestamp="2025-10-26 15:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:24.086082936 +0000 UTC m=+1.600277814" watchObservedRunningTime="2025-10-26 15:20:24.102184537 +0000 UTC m=+1.616379423"
	Oct 26 15:20:24 newest-cni-810872 kubelet[1293]: I1026 15:20:24.129663    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-810872" podStartSLOduration=1.129641092 podStartE2EDuration="1.129641092s" podCreationTimestamp="2025-10-26 15:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:24.103317956 +0000 UTC m=+1.617512834" watchObservedRunningTime="2025-10-26 15:20:24.129641092 +0000 UTC m=+1.643835970"
	Oct 26 15:20:27 newest-cni-810872 kubelet[1293]: I1026 15:20:27.255697    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:20:27 newest-cni-810872 kubelet[1293]: I1026 15:20:27.256277    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177118    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-xtables-lock\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177346    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20c61cd-9231-44c6-9861-45cb1d45c060-lib-modules\") pod \"kube-proxy-7rsbv\" (UID: \"d20c61cd-9231-44c6-9861-45cb1d45c060\") " pod="kube-system/kube-proxy-7rsbv"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177503    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-proxy\") pod \"kube-proxy-7rsbv\" (UID: \"d20c61cd-9231-44c6-9861-45cb1d45c060\") " pod="kube-system/kube-proxy-7rsbv"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177665    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-cni-cfg\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177699    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-lib-modules\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177720    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5qw\" (UniqueName: \"kubernetes.io/projected/52fc9b6a-4117-47b6-8fd4-eff144861784-kube-api-access-mf5qw\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177740    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20c61cd-9231-44c6-9861-45cb1d45c060-xtables-lock\") pod \"kube-proxy-7rsbv\" (UID: \"d20c61cd-9231-44c6-9861-45cb1d45c060\") " pod="kube-system/kube-proxy-7rsbv"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.177756    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7khq\" (UniqueName: \"kubernetes.io/projected/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-api-access-s7khq\") pod \"kube-proxy-7rsbv\" (UID: \"d20c61cd-9231-44c6-9861-45cb1d45c060\") " pod="kube-system/kube-proxy-7rsbv"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: I1026 15:20:28.307018    1293 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: W1026 15:20:28.351675    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/crio-10fc9ca0fea66ea3803fa66fdd1a1a211681ce6a3e988c37c4e39ca663460e5e WatchSource:0}: Error finding container 10fc9ca0fea66ea3803fa66fdd1a1a211681ce6a3e988c37c4e39ca663460e5e: Status 404 returned error can't find the container with id 10fc9ca0fea66ea3803fa66fdd1a1a211681ce6a3e988c37c4e39ca663460e5e
	Oct 26 15:20:28 newest-cni-810872 kubelet[1293]: W1026 15:20:28.355443    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/crio-b11011ca63087197a5e3a7dbd4350a10c0eddb872b570570d8ad65d479332bd3 WatchSource:0}: Error finding container b11011ca63087197a5e3a7dbd4350a10c0eddb872b570570d8ad65d479332bd3: Status 404 returned error can't find the container with id b11011ca63087197a5e3a7dbd4350a10c0eddb872b570570d8ad65d479332bd3
	Oct 26 15:20:29 newest-cni-810872 kubelet[1293]: I1026 15:20:29.047983    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ggnvk" podStartSLOduration=2.047960845 podStartE2EDuration="2.047960845s" podCreationTimestamp="2025-10-26 15:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:29.021096969 +0000 UTC m=+6.535291855" watchObservedRunningTime="2025-10-26 15:20:29.047960845 +0000 UTC m=+6.562155715"
	Oct 26 15:20:29 newest-cni-810872 kubelet[1293]: I1026 15:20:29.077370    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7rsbv" podStartSLOduration=2.077349607 podStartE2EDuration="2.077349607s" podCreationTimestamp="2025-10-26 15:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:20:29.048670695 +0000 UTC m=+6.562865565" watchObservedRunningTime="2025-10-26 15:20:29.077349607 +0000 UTC m=+6.591544485"
	

-- /stdout --
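
The kubelet lines above are systemd journal entries captured from the node; later in this report the harness checks the same unit with systemctl is-active --quiet service kubelet. As a minimal sketch, an equivalent window can be pulled by hand from inside the node (the timestamps below are just the window shown above, not required values):

	# from a shell inside the node, e.g. minikube ssh -p newest-cni-810872
	sudo journalctl -u kubelet --no-pager --since "2025-10-26 15:20:28" --until "2025-10-26 15:20:30"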
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-810872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b49d6 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner: exit status 1 (95.079657ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b49d6" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)
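
For reference, the non-running-pod sweep that drives this post-mortem is a plain field selector on pod phase; a standalone sketch built from the flags shown above:

	kubectl --context newest-cni-810872 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

Here it named coredns-66bc5c9577-b49d6 and storage-provisioner, and neither pod existed by the time kubectl describe ran, which is consistent with the NotFound errors above.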

TestStartStop/group/newest-cni/serial/Pause (6.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-810872 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-810872 --alsologtostderr -v=1: exit status 80 (1.821323445s)

-- stdout --
	* Pausing node newest-cni-810872 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:20:51.327426  920869 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:20:51.327648  920869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:51.327663  920869 out.go:374] Setting ErrFile to fd 2...
	I1026 15:20:51.327668  920869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:51.327949  920869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:20:51.328244  920869 out.go:368] Setting JSON to false
	I1026 15:20:51.328286  920869 mustload.go:65] Loading cluster: newest-cni-810872
	I1026 15:20:51.328738  920869 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:51.329269  920869 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:51.349570  920869 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:51.349886  920869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:51.416046  920869 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 15:20:51.401519441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:51.416890  920869 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-810872 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:20:51.427259  920869 out.go:179] * Pausing node newest-cni-810872 ... 
	I1026 15:20:51.430341  920869 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:51.430687  920869 ssh_runner.go:195] Run: systemctl --version
	I1026 15:20:51.430747  920869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:51.449312  920869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:51.555774  920869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:20:51.568858  920869 pause.go:52] kubelet running: true
	I1026 15:20:51.568946  920869 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:20:51.784191  920869 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:20:51.784287  920869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:20:51.862227  920869 cri.go:89] found id: "9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5"
	I1026 15:20:51.862252  920869 cri.go:89] found id: "088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3"
	I1026 15:20:51.862257  920869 cri.go:89] found id: "c3c835442195947feaa5c9643bf06f25c54f4301cb28669c53826faac0cd7145"
	I1026 15:20:51.862261  920869 cri.go:89] found id: "3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593"
	I1026 15:20:51.862264  920869 cri.go:89] found id: "629f275ce664cc35af8b347d8b11bb813d2dc6e37a24629561382ad36edfce32"
	I1026 15:20:51.862268  920869 cri.go:89] found id: "4b83b18a315545aca9f139c5b37a51f23e22004c6a5ceae83fddaa2f4eaa4492"
	I1026 15:20:51.862271  920869 cri.go:89] found id: ""
	I1026 15:20:51.862326  920869 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:20:51.885464  920869 retry.go:31] will retry after 283.09899ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:51Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:20:52.168856  920869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:20:52.183088  920869 pause.go:52] kubelet running: false
	I1026 15:20:52.183211  920869 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:20:52.343992  920869 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:20:52.344088  920869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:20:52.427283  920869 cri.go:89] found id: "9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5"
	I1026 15:20:52.427307  920869 cri.go:89] found id: "088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3"
	I1026 15:20:52.427312  920869 cri.go:89] found id: "c3c835442195947feaa5c9643bf06f25c54f4301cb28669c53826faac0cd7145"
	I1026 15:20:52.427316  920869 cri.go:89] found id: "3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593"
	I1026 15:20:52.427320  920869 cri.go:89] found id: "629f275ce664cc35af8b347d8b11bb813d2dc6e37a24629561382ad36edfce32"
	I1026 15:20:52.427331  920869 cri.go:89] found id: "4b83b18a315545aca9f139c5b37a51f23e22004c6a5ceae83fddaa2f4eaa4492"
	I1026 15:20:52.427335  920869 cri.go:89] found id: ""
	I1026 15:20:52.427390  920869 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:20:52.441339  920869 retry.go:31] will retry after 357.023574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:52Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:20:52.798719  920869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:20:52.813892  920869 pause.go:52] kubelet running: false
	I1026 15:20:52.814036  920869 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:20:52.969763  920869 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:20:52.969957  920869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:20:53.046296  920869 cri.go:89] found id: "9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5"
	I1026 15:20:53.046375  920869 cri.go:89] found id: "088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3"
	I1026 15:20:53.046393  920869 cri.go:89] found id: "c3c835442195947feaa5c9643bf06f25c54f4301cb28669c53826faac0cd7145"
	I1026 15:20:53.046411  920869 cri.go:89] found id: "3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593"
	I1026 15:20:53.046446  920869 cri.go:89] found id: "629f275ce664cc35af8b347d8b11bb813d2dc6e37a24629561382ad36edfce32"
	I1026 15:20:53.046469  920869 cri.go:89] found id: "4b83b18a315545aca9f139c5b37a51f23e22004c6a5ceae83fddaa2f4eaa4492"
	I1026 15:20:53.046486  920869 cri.go:89] found id: ""
	I1026 15:20:53.046568  920869 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:20:53.062855  920869 out.go:203] 
	W1026 15:20:53.065984  920869 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:20:53.066014  920869 out.go:285] * 
	* 
	W1026 15:20:53.073260  920869 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:20:53.076284  920869 out.go:203] 

** /stderr **
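
The pause failure above comes down to one step: after disabling the kubelet, minikube enumerates running containers with sudo runc list -f json, and on this CRI-O node that fails with open /run/runc: no such file or directory, so each retry hits the same wall and the command exits with GUEST_PAUSE. A hedged pair of checks for reproducing the mismatch on the node (/run/runc is runc's usual default state root; the effective root depends on how the runtime is configured):

	# the CRI-level listing succeeds, matching the found id lines above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc's default state root is absent, which is exactly what the error reports
	sudo ls /run/runc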
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-810872 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-810872
helpers_test.go:243: (dbg) docker inspect newest-cni-810872:

-- stdout --
	[
	    {
	        "Id": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	        "Created": "2025-10-26T15:19:50.863323675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 919089,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:20:35.225485723Z",
	            "FinishedAt": "2025-10-26T15:20:34.336689855Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hosts",
	        "LogPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1-json.log",
	        "Name": "/newest-cni-810872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-810872:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-810872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	                "LowerDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-810872",
	                "Source": "/var/lib/docker/volumes/newest-cni-810872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-810872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-810872",
	                "name.minikube.sigs.k8s.io": "newest-cni-810872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3434d43bb58d98d1091f600bfd633002a91aee9ff8b266efa0fde0d05a3085d",
	            "SandboxKey": "/var/run/docker/netns/e3434d43bb58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-810872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:d5:4c:bc:cd:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd72f372b9d59036c2bf74ba038a42769a6a6fe23c0e4f9a4a483ae08bcd16c7",
	                    "EndpointID": "6aed6e4a2a5370a45baef4a05fcb3eb89fe7211ec43414f3069ae96f2d29e4d7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-810872",
	                        "fcebd0173001"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
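
The Ports map under NetworkSettings is what the harness dials; the SSH port lookup earlier in this log uses a Go template over the same structure. The one-liner, copied from the cli_runner invocation above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-810872
	# prints 33862 here, matching both the sshutil line and the inspect output above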
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872: exit status 2 (360.121643ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25: (1.113275353s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                                                                                                    │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ stop    │ -p newest-cni-810872 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-810872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ image   │ newest-cni-810872 image list --format=json                                                                                                                                                                                                    │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p newest-cni-810872 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
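
The audit table is minikube's own per-host command history; rows with an empty END TIME, including the pause invocation under test, appear to be commands that did not complete successfully. The table above came from the post-mortem's log collection, which can be rerun on its own with the same binary:

	out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25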
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:20:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:20:34.926623  918963 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:20:34.926793  918963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:34.926803  918963 out.go:374] Setting ErrFile to fd 2...
	I1026 15:20:34.926808  918963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:34.927094  918963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:20:34.927507  918963 out.go:368] Setting JSON to false
	I1026 15:20:34.928539  918963 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18187,"bootTime":1761473848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:20:34.928607  918963 start.go:141] virtualization:  
	I1026 15:20:34.931818  918963 out.go:179] * [newest-cni-810872] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:20:34.935779  918963 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:20:34.935867  918963 notify.go:220] Checking for updates...
	I1026 15:20:34.941616  918963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:20:34.944563  918963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:34.947382  918963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:20:34.950248  918963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:20:34.953163  918963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:20:34.956431  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:34.957048  918963 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:20:34.990266  918963 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:20:34.990382  918963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:35.055227  918963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:35.045259475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:35.055344  918963 docker.go:318] overlay module found
	I1026 15:20:35.058562  918963 out.go:179] * Using the docker driver based on existing profile
	I1026 15:20:35.061565  918963 start.go:305] selected driver: docker
	I1026 15:20:35.061585  918963 start.go:925] validating driver "docker" against &{Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:35.061695  918963 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:20:35.062453  918963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:35.126047  918963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:35.116428846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:35.126416  918963 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:20:35.126454  918963 cni.go:84] Creating CNI manager for ""
	I1026 15:20:35.126520  918963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:35.126562  918963 start.go:349] cluster config:
	{Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:35.129822  918963 out.go:179] * Starting "newest-cni-810872" primary control-plane node in "newest-cni-810872" cluster
	I1026 15:20:35.132792  918963 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:20:35.135839  918963 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:20:35.138858  918963 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:35.138947  918963 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:20:35.138962  918963 cache.go:58] Caching tarball of preloaded images
	I1026 15:20:35.138971  918963 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:20:35.139059  918963 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:20:35.139069  918963 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:20:35.139186  918963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/config.json ...
	I1026 15:20:35.161094  918963 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:20:35.161120  918963 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:20:35.161142  918963 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:20:35.161170  918963 start.go:360] acquireMachinesLock for newest-cni-810872: {Name:mk50aa66027ddaa44fbf43aa11b8f9f4974507d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:20:35.161244  918963 start.go:364] duration metric: took 44.964µs to acquireMachinesLock for "newest-cni-810872"
	I1026 15:20:35.161269  918963 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:20:35.161280  918963 fix.go:54] fixHost starting: 
	I1026 15:20:35.161539  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:35.183475  918963 fix.go:112] recreateIfNeeded on newest-cni-810872: state=Stopped err=<nil>
	W1026 15:20:35.183512  918963 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 15:20:33.614775  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:35.614939  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:35.186775  918963 out.go:252] * Restarting existing docker container for "newest-cni-810872" ...
	I1026 15:20:35.186925  918963 cli_runner.go:164] Run: docker start newest-cni-810872
	I1026 15:20:35.467929  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:35.493535  918963 kic.go:430] container "newest-cni-810872" state is running.
	I1026 15:20:35.493927  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:35.520832  918963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/config.json ...
	I1026 15:20:35.521104  918963 machine.go:93] provisionDockerMachine start ...
	I1026 15:20:35.521172  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:35.549170  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:35.549502  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:35.549511  918963 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:20:35.550865  918963 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
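
The "ssh: handshake failed: EOF" above is transient: sshd inside the just-restarted container is not accepting connections yet, and the runner retries until it is (the successful hostname result about three seconds later confirms this). A minimal sketch of such a retry loop in Go, assuming a plain TCP reachability check against the forwarded port rather than minikube's actual libmachine SSH client:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the forwarded SSH port until the endpoint accepts
// TCP connections or the deadline passes. A hypothetical stand-in for
// the retry the log implies, not minikube's real code.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is accepting connections
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh endpoint %s not ready within %s", addr, timeout)
}

func main() {
	// 127.0.0.1:33862 is the forwarded port shown in the log above.
	if err := waitForSSH("127.0.0.1:33862", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
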
	I1026 15:20:38.708328  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:20:38.708359  918963 ubuntu.go:182] provisioning hostname "newest-cni-810872"
	I1026 15:20:38.708424  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:38.726493  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:38.726799  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:38.726815  918963 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-810872 && echo "newest-cni-810872" | sudo tee /etc/hostname
	I1026 15:20:38.886632  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:20:38.886719  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:38.904029  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:38.904344  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:38.904367  918963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-810872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-810872/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-810872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:20:39.057140  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:20:39.057166  918963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:20:39.057187  918963 ubuntu.go:190] setting up certificates
	I1026 15:20:39.057198  918963 provision.go:84] configureAuth start
	I1026 15:20:39.057258  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:39.075167  918963 provision.go:143] copyHostCerts
	I1026 15:20:39.075253  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:20:39.075276  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:20:39.075352  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:20:39.075454  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:20:39.075463  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:20:39.075491  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:20:39.075554  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:20:39.075564  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:20:39.075589  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:20:39.075642  918963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.newest-cni-810872 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-810872]
	I1026 15:20:39.628232  918963 provision.go:177] copyRemoteCerts
	I1026 15:20:39.628299  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:20:39.628350  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:39.646158  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:39.756782  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:20:39.774971  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:20:39.793209  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:20:39.811355  918963 provision.go:87] duration metric: took 754.133031ms to configureAuth
	I1026 15:20:39.811424  918963 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:20:39.811640  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:39.811758  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:39.829752  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:39.830076  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:39.830098  918963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1026 15:20:38.115576  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:40.616027  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:40.150463  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:20:40.150551  918963 machine.go:96] duration metric: took 4.629436671s to provisionDockerMachine
	I1026 15:20:40.150578  918963 start.go:293] postStartSetup for "newest-cni-810872" (driver="docker")
	I1026 15:20:40.150614  918963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:20:40.150728  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:20:40.150792  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.169796  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.281517  918963 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:20:40.285327  918963 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:20:40.285368  918963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:20:40.285381  918963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:20:40.285436  918963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:20:40.285517  918963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:20:40.285635  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:20:40.293254  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:20:40.314726  918963 start.go:296] duration metric: took 164.10667ms for postStartSetup
	I1026 15:20:40.314879  918963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:20:40.314965  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.334347  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.434150  918963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:20:40.439110  918963 fix.go:56] duration metric: took 5.277822656s for fixHost
	I1026 15:20:40.439187  918963 start.go:83] releasing machines lock for "newest-cni-810872", held for 5.27792966s
	I1026 15:20:40.439290  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:40.457059  918963 ssh_runner.go:195] Run: cat /version.json
	I1026 15:20:40.457097  918963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:20:40.457113  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.457160  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.480637  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.487259  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.672813  918963 ssh_runner.go:195] Run: systemctl --version
	I1026 15:20:40.679861  918963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:20:40.724268  918963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:20:40.729370  918963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:20:40.729452  918963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:20:40.737962  918963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:20:40.737991  918963 start.go:495] detecting cgroup driver to use...
	I1026 15:20:40.738023  918963 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:20:40.738085  918963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:20:40.753124  918963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:20:40.766521  918963 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:20:40.766622  918963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:20:40.783072  918963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:20:40.796322  918963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:20:40.921668  918963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:20:41.044939  918963 docker.go:234] disabling docker service ...
	I1026 15:20:41.045066  918963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:20:41.061518  918963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:20:41.080891  918963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:20:41.221239  918963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:20:41.354910  918963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:20:41.368477  918963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:20:41.383031  918963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:20:41.383144  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.392981  918963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:20:41.393105  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.402728  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.413420  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.430310  918963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:20:41.439250  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.448879  918963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.457886  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.467199  918963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:20:41.475226  918963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:20:41.483106  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:41.617484  918963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:20:41.777979  918963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:20:41.778049  918963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:20:41.782500  918963 start.go:563] Will wait 60s for crictl version
	I1026 15:20:41.782646  918963 ssh_runner.go:195] Run: which crictl
	I1026 15:20:41.786597  918963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:20:41.810990  918963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
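
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above boils down to polling for the socket file until the restarted CRI-O recreates it. A minimal sketch of that wait, assuming a local os.Stat rather than the SSH-based stat the log actually runs:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket file until it exists or the
// timeout elapses, mirroring the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket has appeared
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
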
	I1026 15:20:41.811209  918963 ssh_runner.go:195] Run: crio --version
	I1026 15:20:41.841677  918963 ssh_runner.go:195] Run: crio --version
	I1026 15:20:41.892090  918963 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:20:41.895033  918963 cli_runner.go:164] Run: docker network inspect newest-cni-810872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:20:41.913182  918963 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:20:41.917095  918963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:20:41.929946  918963 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:20:41.932895  918963 kubeadm.go:883] updating cluster {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:20:41.933046  918963 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:41.933151  918963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:20:41.967656  918963 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:20:41.967684  918963 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:20:41.967740  918963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:20:41.998615  918963 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:20:41.998643  918963 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:20:41.998650  918963 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:20:41.998748  918963 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-810872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:20:41.998842  918963 ssh_runner.go:195] Run: crio config
	I1026 15:20:42.073705  918963 cni.go:84] Creating CNI manager for ""
	I1026 15:20:42.073742  918963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:42.073771  918963 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:20:42.073797  918963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-810872 NodeName:newest-cni-810872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:20:42.073948  918963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-810872"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
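
The config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---). To sanity-check a file like this, one could decode each document and confirm the pod and service CIDRs agree, as in the following sketch using gopkg.in/yaml.v3 (the local file name is an assumption):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures just the fields of interest from each YAML document in
// the "---"-separated kubeadm config shown above; everything else is
// ignored by the decoder.
type doc struct {
	Kind        string `yaml:"kind"`
	ClusterCIDR string `yaml:"clusterCIDR"`
	Networking  struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // io.EOF once all documents are read
		}
		switch d.Kind {
		case "ClusterConfiguration":
			fmt.Println("podSubnet:", d.Networking.PodSubnet)
			fmt.Println("serviceSubnet:", d.Networking.ServiceSubnet)
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", d.ClusterCIDR)
		}
	}
}

For the config above, podSubnet and clusterCIDR should both print 10.42.0.0/16, matching the kubeadm.pod-network-cidr extra option.
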
	
	I1026 15:20:42.074032  918963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:20:42.084550  918963 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:20:42.084646  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:20:42.094942  918963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:20:42.116341  918963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:20:42.140776  918963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 15:20:42.158422  918963 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:20:42.163459  918963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:20:42.177517  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:42.316525  918963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:42.337496  918963 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872 for IP: 192.168.85.2
	I1026 15:20:42.337587  918963 certs.go:195] generating shared ca certs ...
	I1026 15:20:42.337619  918963 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:42.337818  918963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:20:42.337887  918963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:20:42.337909  918963 certs.go:257] generating profile certs ...
	I1026 15:20:42.338053  918963 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.key
	I1026 15:20:42.338167  918963 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940
	I1026 15:20:42.338262  918963 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key
	I1026 15:20:42.338416  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:20:42.338469  918963 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:20:42.338492  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:20:42.338564  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:20:42.338616  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:20:42.338673  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:20:42.338770  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:20:42.339402  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:20:42.368485  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:20:42.388324  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:20:42.409155  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:20:42.441639  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:20:42.460800  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:20:42.491509  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:20:42.517765  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:20:42.538370  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:20:42.560901  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:20:42.583257  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:20:42.604334  918963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:20:42.628896  918963 ssh_runner.go:195] Run: openssl version
	I1026 15:20:42.644505  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:20:42.654305  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.659564  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.659645  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.707754  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:20:42.716310  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:20:42.725333  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.729425  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.729541  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.772893  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:20:42.782012  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:20:42.791031  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.795689  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.795797  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.837604  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
	I1026 15:20:42.846188  918963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:20:42.850830  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:20:42.893843  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:20:42.936160  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:20:42.978400  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:20:43.024991  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:20:43.071276  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
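
Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours). The same check expressed in Go, as a self-contained sketch that reads a local file rather than running over SSH as the log does:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d: the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
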
	I1026 15:20:43.120536  918963 kubeadm.go:400] StartCluster: {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:43.120736  918963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:20:43.120833  918963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:20:43.187827  918963 cri.go:89] found id: "3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593"
	I1026 15:20:43.187901  918963 cri.go:89] found id: ""
	I1026 15:20:43.188147  918963 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:20:43.218705  918963 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:43Z" level=error msg="open /run/runc: no such file or directory"
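
The failure report above follows the runner's usual shape: stdout and stderr are captured separately, and the exit status is surfaced alongside them. A sketch of that capture pattern with os/exec (minikube's actual ssh_runner executes the command over SSH instead):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// run executes a command and returns its stdout, stderr, and error,
// mirroring how the log above reports "Process exited with status 1"
// together with both output streams.
func run(name string, args ...string) (string, string, error) {
	cmd := exec.Command(name, args...)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err := cmd.Run()
	return out.String(), errBuf.String(), err
}

func main() {
	stdout, stderr, err := run("runc", "list", "-f", "json")
	fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", stdout, stderr, err)
}
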
	I1026 15:20:43.218792  918963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:20:43.238786  918963 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:20:43.238856  918963 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:20:43.238948  918963 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:20:43.253154  918963 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:20:43.253896  918963 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-810872" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:43.254273  918963 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-810872" cluster setting kubeconfig missing "newest-cni-810872" context setting]
	I1026 15:20:43.254878  918963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.256646  918963 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:20:43.290582  918963 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 15:20:43.290668  918963 kubeadm.go:601] duration metric: took 51.792921ms to restartPrimaryControlPlane
	I1026 15:20:43.290691  918963 kubeadm.go:402] duration metric: took 170.164632ms to StartCluster
	I1026 15:20:43.290735  918963 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.290832  918963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:43.292012  918963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.292347  918963 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:43.293026  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:43.293175  918963 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:20:43.293270  918963 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-810872"
	I1026 15:20:43.293284  918963 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-810872"
	W1026 15:20:43.293291  918963 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:20:43.293315  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.293865  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.294087  918963 addons.go:69] Setting dashboard=true in profile "newest-cni-810872"
	I1026 15:20:43.294119  918963 addons.go:238] Setting addon dashboard=true in "newest-cni-810872"
	W1026 15:20:43.294139  918963 addons.go:247] addon dashboard should already be in state true
	I1026 15:20:43.294190  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.294670  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.298962  918963 addons.go:69] Setting default-storageclass=true in profile "newest-cni-810872"
	I1026 15:20:43.299365  918963 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-810872"
	I1026 15:20:43.300710  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.302414  918963 out.go:179] * Verifying Kubernetes components...
	I1026 15:20:43.306386  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:43.357333  918963 addons.go:238] Setting addon default-storageclass=true in "newest-cni-810872"
	W1026 15:20:43.357358  918963 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:20:43.357383  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.357807  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.364784  918963 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:20:43.364870  918963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:20:43.367994  918963 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:43.368019  918963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:20:43.368089  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.371311  918963 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:20:43.374313  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:20:43.374339  918963 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:20:43.374417  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.408875  918963 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:43.408912  918963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:20:43.408984  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.439491  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.449057  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.460641  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.699680  918963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:43.732041  918963 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:20:43.732187  918963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:20:43.732648  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:43.762271  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:43.797164  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:20:43.797190  918963 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:20:43.801790  918963 api_server.go:72] duration metric: took 509.352121ms to wait for apiserver process to appear ...
	I1026 15:20:43.801819  918963 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:20:43.801838  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:43.802161  918963 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 15:20:43.834566  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:20:43.834592  918963 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:20:43.866118  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:20:43.866143  918963 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:20:43.943386  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:20:43.943409  918963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:20:44.022620  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:20:44.022647  918963 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:20:44.059187  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:20:44.059218  918963 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:20:44.086734  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:20:44.086802  918963 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:20:44.119973  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:20:44.120044  918963 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:20:44.141237  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:44.141311  918963 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:20:44.163421  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:44.302743  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1026 15:20:43.114776  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:45.115214  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:47.115444  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:47.616443  914177 pod_ready.go:94] pod "coredns-66bc5c9577-zm8vb" is "Ready"
	I1026 15:20:47.616510  914177 pod_ready.go:86] duration metric: took 36.007779577s for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.619627  914177 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.626283  914177 pod_ready.go:94] pod "etcd-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.626364  914177 pod_ready.go:86] duration metric: took 6.655392ms for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.629521  914177 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.640804  914177 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.640889  914177 pod_ready.go:86] duration metric: took 11.292956ms for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.643685  914177 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.812158  914177 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.812239  914177 pod_ready.go:86] duration metric: took 168.478952ms for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.013331  914177 pod_ready.go:83] waiting for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.412730  914177 pod_ready.go:94] pod "kube-proxy-nbcd6" is "Ready"
	I1026 15:20:48.412806  914177 pod_ready.go:86] duration metric: took 399.388552ms for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.612713  914177 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:49.012887  914177 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:49.012977  914177 pod_ready.go:86] duration metric: took 400.197127ms for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:49.013005  914177 pod_ready.go:40] duration metric: took 37.416377925s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:20:49.098321  914177 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:20:49.101428  914177 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-494684" cluster and "default" namespace by default
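
The pod_ready loop that just completed polls each kube-system pod until its Ready condition turns true or a timeout fires. A rough client-go sketch of the same wait; the kubeconfig path is an assumption, and minikube's real implementation differs:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady fetches the pod and inspects its Ready condition, the
// same signal the pod_ready log lines above are waiting on.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		ready, err := isPodReady(ctx, cs, "kube-system", "coredns-66bc5c9577-zm8vb")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
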
	I1026 15:20:48.470198  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:20:48.470234  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:20:48.470249  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:48.648395  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:20:48.648421  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:20:48.802721  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:48.863367  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:48.863451  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:20:49.301948  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:49.335987  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:49.336022  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:20:49.802177  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:49.820487  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:49.820520  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
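
Across the three verbose 500 responses above, the set of failing post-start hooks shrinks from four (start-service-ip-repair-controllers, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller) to two and then to rbac/bootstrap-roles alone, just before the endpoint returns 200 below. A small stdlib helper along these lines could extract the failing checks from such a body; this is an illustrative sketch, not minikube code:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// failingChecks returns the names of checks reported as "[-]... failed"
	// in a verbose /healthz body like the ones logged above.
	func failingChecks(body string) []string {
		var failed []string
		sc := bufio.NewScanner(strings.NewReader(body))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "[-]") {
				continue
			}
			// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			if fields := strings.Fields(strings.TrimPrefix(line, "[-]")); len(fields) > 0 {
				failed = append(failed, fields[0])
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
		fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
	}
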
	I1026 15:20:50.145693  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.412991501s)
	I1026 15:20:50.145851  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.982332878s)
	I1026 15:20:50.146048  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.383403583s)
	I1026 15:20:50.149001  918963 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-810872 addons enable metrics-server
	
	I1026 15:20:50.176755  918963 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:20:50.179730  918963 addons.go:514] duration metric: took 6.886538561s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:20:50.302222  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:50.311560  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:20:50.312896  918963 api_server.go:141] control plane version: v1.34.1
	I1026 15:20:50.312929  918963 api_server.go:131] duration metric: took 6.511103592s to wait for apiserver health ...
	I1026 15:20:50.312939  918963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:20:50.317004  918963 system_pods.go:59] 8 kube-system pods found
	I1026 15:20:50.317041  918963 system_pods.go:61] "coredns-66bc5c9577-b49d6" [0cc1ad2e-be8a-43fb-baed-3d411550f34c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:50.317050  918963 system_pods.go:61] "etcd-newest-cni-810872" [784475d8-6ee3-45c9-a0cc-55d18ee84177] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:20:50.317059  918963 system_pods.go:61] "kindnet-ggnvk" [52fc9b6a-4117-47b6-8fd4-eff144861784] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:20:50.317066  918963 system_pods.go:61] "kube-apiserver-newest-cni-810872" [cdd8bae8-4574-497b-a540-57831768a16b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:20:50.317073  918963 system_pods.go:61] "kube-controller-manager-newest-cni-810872" [96ea627b-92e4-448c-8621-2129603a8ce3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:20:50.317085  918963 system_pods.go:61] "kube-proxy-7rsbv" [d20c61cd-9231-44c6-9861-45cb1d45c060] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:20:50.317092  918963 system_pods.go:61] "kube-scheduler-newest-cni-810872" [17a3ef6c-201f-4fdb-b45f-6e3b2614a3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:20:50.317104  918963 system_pods.go:61] "storage-provisioner" [6a816eb1-59c8-4ed0-9087-4fb271f4608b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:50.317111  918963 system_pods.go:74] duration metric: took 4.139319ms to wait for pod list to return data ...
	I1026 15:20:50.317121  918963 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:20:50.320152  918963 default_sa.go:45] found service account: "default"
	I1026 15:20:50.320179  918963 default_sa.go:55] duration metric: took 3.051635ms for default service account to be created ...
	I1026 15:20:50.320191  918963 kubeadm.go:586] duration metric: took 7.027758275s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:20:50.320208  918963 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:20:50.322729  918963 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:20:50.322763  918963 node_conditions.go:123] node cpu capacity is 2
	I1026 15:20:50.322776  918963 node_conditions.go:105] duration metric: took 2.561563ms to run NodePressure ...
	I1026 15:20:50.322788  918963 start.go:241] waiting for startup goroutines ...
	I1026 15:20:50.322796  918963 start.go:246] waiting for cluster config update ...
	I1026 15:20:50.322811  918963 start.go:255] writing updated cluster config ...
	I1026 15:20:50.323113  918963 ssh_runner.go:195] Run: rm -f paused
	I1026 15:20:50.429104  918963 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:20:50.432795  918963 out.go:179] * Done! kubectl is now configured to use "newest-cni-810872" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.339722633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.348506795Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=efae6dd2-a857-4977-bf33-19436ede827a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.35665223Z" level=info msg="Ran pod sandbox 4745de2c92ffa3b1745379d9a9bc0cf40b932f74be71ee063971dc2cb00248fd with infra container: kube-system/kube-proxy-7rsbv/POD" id=efae6dd2-a857-4977-bf33-19436ede827a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.358870978Z" level=info msg="Running pod sandbox: kube-system/kindnet-ggnvk/POD" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.358940854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.362335763Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.368026361Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b9a47c40-6d05-4a0b-bed1-f116f1200693 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.371104212Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=33d3c164-1477-42d5-b965-e21c15b29636 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.374168155Z" level=info msg="Creating container: kube-system/kube-proxy-7rsbv/kube-proxy" id=f098c8e8-c8fc-411c-ae8e-339dda211188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.3742966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.37746255Z" level=info msg="Ran pod sandbox 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38 with infra container: kube-system/kindnet-ggnvk/POD" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.382954221Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=af656654-2125-4caf-8346-6f4ea8fa4c99 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.388079052Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0f371136-1a36-446d-88cc-5eff879c6b92 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.389912787Z" level=info msg="Creating container: kube-system/kindnet-ggnvk/kindnet-cni" id=290ebf79-dbed-497f-a74c-7c38e961bea3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.390039033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.39166822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.394396496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.395320387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.396549069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.453729323Z" level=info msg="Created container 088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3: kube-system/kube-proxy-7rsbv/kube-proxy" id=f098c8e8-c8fc-411c-ae8e-339dda211188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.456508184Z" level=info msg="Created container 9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5: kube-system/kindnet-ggnvk/kindnet-cni" id=290ebf79-dbed-497f-a74c-7c38e961bea3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.475453048Z" level=info msg="Starting container: 9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5" id=9cabdf3c-5922-4b90-82b2-3791c1cb6914 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.478425183Z" level=info msg="Starting container: 088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3" id=1d5dc104-7d02-4f63-8419-4ceaffb8208b name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.486061747Z" level=info msg="Started container" PID=1061 containerID=9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5 description=kube-system/kindnet-ggnvk/kindnet-cni id=9cabdf3c-5922-4b90-82b2-3791c1cb6914 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.493013799Z" level=info msg="Started container" PID=1060 containerID=088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3 description=kube-system/kube-proxy-7rsbv/kube-proxy id=1d5dc104-7d02-4f63-8419-4ceaffb8208b name=/runtime.v1.RuntimeService/StartContainer sandboxID=4745de2c92ffa3b1745379d9a9bc0cf40b932f74be71ee063971dc2cb00248fd
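
The CRI-O lines above show the full RunPodSandbox, CreateContainer, StartContainer sequence for kube-proxy-7rsbv and kindnet-ggnvk. The same RuntimeService API can be queried directly over CRI-O's gRPC socket; a minimal sketch follows, assuming the k8s.io/cri-api client and CRI-O's default socket path (nothing the test itself uses):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default RuntimeService socket; adjust if configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// truncated IDs, matching the "container status" table format
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}
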
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9f5cdb4d4577f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               1                   66b54d3edb981       kindnet-ggnvk                               kube-system
	088e65f93c8fc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                1                   4745de2c92ffa       kube-proxy-7rsbv                            kube-system
	c3c8354421959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   ccb1cd67f0a05       etcd-newest-cni-810872                      kube-system
	3f0eac97cebef       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   5726231d02b21       kube-controller-manager-newest-cni-810872   kube-system
	629f275ce664c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   32b83e56aea85       kube-apiserver-newest-cni-810872            kube-system
	4b83b18a31554       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   95694fdd3371f       kube-scheduler-newest-cni-810872            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-810872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-810872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-810872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_20_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-810872
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-810872
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cb876d54-b19f-49ca-b5c7-700f084fb6f3
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-810872                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-ggnvk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-810872             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-810872    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-7rsbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-810872             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-810872 event: Registered Node newest-cni-810872 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-810872 event: Registered Node newest-cni-810872 in Controller
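
The describe output confirms why coredns and storage-provisioner were reported Unschedulable earlier: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because the kubelet reports NetworkPluginNotReady until kindnet writes a CNI config into /etc/cni/net.d/. A minimal client-go sketch that surfaces exactly those two fields, assuming the default kubeconfig location:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"newest-cni-810872", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Ready=False with reason KubeletNotReady is what keeps the taint in place.
		for _, c := range node.Status.Conditions {
			fmt.Printf("condition %-16s %s: %s\n", c.Type, c.Status, c.Reason)
		}
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
		}
	}
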
	
	
	==> dmesg <==
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	[Oct26 15:20] overlayfs: idmapped layers are currently not supported
	[  +9.178014] overlayfs: idmapped layers are currently not supported
	[ +33.140474] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c3c835442195947feaa5c9643bf06f25c54f4301cb28669c53826faac0cd7145] <==
	{"level":"warn","ts":"2025-10-26T15:20:47.112605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.133845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.160121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.177165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.209973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.217116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.235270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.282573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.294176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.310426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.332953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.347123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.381873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.425512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.440785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.457558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.483175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.494296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.516852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.530641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.552224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.572501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.600867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.629931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.726076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:20:54 up  5:03,  0 user,  load average: 4.17, 3.86, 3.24
	Linux newest-cni-810872 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5] <==
	I1026 15:20:50.630824       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:20:50.631373       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:20:50.631530       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:20:50.631572       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:20:50.631612       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:20:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:20:50.850499       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:20:50.857300       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:20:50.857410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:20:50.857972       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
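
The final kindnet line is logged at info level rather than as a failure: /var/run/nri/nri.sock is absent because NRI is not enabled in this CRI-O build, so kindnet's NRI plugin exits. The same existence check, as a trivial sketch:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const nriSock = "/var/run/nri/nri.sock" // path from the kindnet log above
		if _, err := os.Stat(nriSock); err != nil {
			fmt.Println("NRI disabled in the runtime:", err)
			return
		}
		fmt.Println("NRI socket present")
	}
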
	
	
	==> kube-apiserver [629f275ce664cc35af8b347d8b11bb813d2dc6e37a24629561382ad36edfce32] <==
	I1026 15:20:48.844878       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:20:48.844952       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:20:48.850038       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:20:48.852839       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:20:48.853510       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:20:48.853591       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:20:48.866215       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:20:48.866326       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:20:48.866477       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:20:48.866492       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:20:48.866499       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:20:48.866505       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:20:48.929508       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:20:49.370727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:20:49.598373       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:20:49.750223       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:20:49.797502       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:20:49.826607       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:20:49.858590       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:20:49.955720       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.228.232"}
	I1026 15:20:49.975836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.223.157"}
	I1026 15:20:52.129503       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:20:52.434663       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:20:52.579591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:52.630811       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593] <==
	I1026 15:20:52.028938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:52.029464       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:20:52.030902       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:20:52.032150       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:20:52.036053       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:52.039227       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:20:52.042448       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:20:52.050047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:20:52.057398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:52.062545       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:20:52.066948       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:20:52.070447       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:20:52.072831       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:20:52.073074       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:20:52.073536       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:20:52.073699       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:20:52.076173       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:20:52.079761       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:52.079792       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:20:52.079800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:20:52.085385       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:20:52.085483       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:20:52.085571       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-810872"
	I1026 15:20:52.085621       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:20:52.090238       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	
	
	==> kube-proxy [088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3] <==
	I1026 15:20:50.590943       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:20:50.804274       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:20:50.908945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:20:50.909066       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:20:50.909196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:20:50.984038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:20:50.984099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:20:50.989776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:20:50.990254       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:20:50.990281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:50.992498       1 config.go:200] "Starting service config controller"
	I1026 15:20:50.992515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:20:50.992533       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:20:50.992537       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:20:50.992548       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:20:50.992552       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:20:50.998433       1 config.go:309] "Starting node config controller"
	I1026 15:20:50.998461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:20:50.998469       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:20:51.092943       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:20:51.092983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:20:51.093030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b83b18a315545aca9f139c5b37a51f23e22004c6a5ceae83fddaa2f4eaa4492] <==
	I1026 15:20:46.004732       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:20:48.496105       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:20:48.496140       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:20:48.496151       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:20:48.496167       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:20:48.844558       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:20:48.844592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:48.856384       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:20:48.856476       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:48.856495       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:48.856510       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:20:48.957878       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715446     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-xtables-lock\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715465     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-cni-cfg\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715482     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-lib-modules\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.757521     724 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-newest-cni-810872\" is forbidden: User \"system:node:newest-cni-810872\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-810872' and this object" podUID="570a64448d1a0176d967fb314f521fba" pod="kube-system/kube-controller-manager-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.788919     724 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-newest-cni-810872\" is forbidden: User \"system:node:newest-cni-810872\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-810872' and this object" podUID="6570284b6b81aecad4d0356ab9d5ec89" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.910095     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-810872\" already exists" pod="kube-system/kube-controller-manager-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.910140     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956114     724 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956230     724 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956274     724 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.957345     724 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.957594     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-810872\" already exists" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.957739     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.008491     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-810872\" already exists" pod="kube-system/etcd-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: I1026 15:20:49.008556     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.102330     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-810872\" already exists" pod="kube-system/kube-apiserver-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560782     724 projected.go:196] Error preparing data for projected volume kube-api-access-mf5qw for pod kube-system/kindnet-ggnvk: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560924     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52fc9b6a-4117-47b6-8fd4-eff144861784-kube-api-access-mf5qw podName:52fc9b6a-4117-47b6-8fd4-eff144861784 nodeName:}" failed. No retries permitted until 2025-10-26 15:20:50.060886294 +0000 UTC m=+7.723457779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mf5qw" (UniqueName: "kubernetes.io/projected/52fc9b6a-4117-47b6-8fd4-eff144861784-kube-api-access-mf5qw") pod "kindnet-ggnvk" (UID: "52fc9b6a-4117-47b6-8fd4-eff144861784") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560988     724 projected.go:196] Error preparing data for projected volume kube-api-access-s7khq for pod kube-system/kube-proxy-7rsbv: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.561046     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-api-access-s7khq podName:d20c61cd-9231-44c6-9861-45cb1d45c060 nodeName:}" failed. No retries permitted until 2025-10-26 15:20:50.061032478 +0000 UTC m=+7.723603865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s7khq" (UniqueName: "kubernetes.io/projected/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-api-access-s7khq") pod "kube-proxy-7rsbv" (UID: "d20c61cd-9231-44c6-9861-45cb1d45c060") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:50 newest-cni-810872 kubelet[724]: I1026 15:20:50.165642     724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:20:50 newest-cni-810872 kubelet[724]: W1026 15:20:50.372037     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/crio-66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38 WatchSource:0}: Error finding container 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38: Status 404 returned error can't find the container with id 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
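
Note on the kubelet excerpt above: the "failed to fetch token ... no relationship found between node and this object" errors are the node restarting faster than the node authorizer can re-link node and pods. Every projected "kube-api-access-*" volume is backed by a TokenRequest API call, which is denied until that relationship exists and is then retried (here after 500ms). A minimal client-go sketch of the same call, assuming a reachable kubeconfig at the default location and reusing the kube-system/kindnet service account from the log:

	package main

	import (
		"context"
		"fmt"

		authenticationv1 "k8s.io/api/authentication/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config; the harness points KUBECONFIG elsewhere.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same API the kubelet uses for projected kube-api-access-* volumes.
		expiry := int64(3600)
		tok, err := cs.CoreV1().ServiceAccounts("kube-system").CreateToken(
			context.Background(), "kindnet",
			&authenticationv1.TokenRequest{
				Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &expiry},
			},
			metav1.CreateOptions{},
		)
		if err != nil {
			// A node identity hits exactly the "forbidden ... no relationship
			// found" error from the log here until its pods are bound to it.
			panic(err)
		}
		fmt.Println("token expires:", tok.Status.ExpirationTimestamp)
	}
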
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-810872 -n newest-cni-810872: exit status 2 (391.579259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-810872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb: exit status 1 (92.136001ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b49d6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hxld6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sbzbb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb: exit status 1
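
The list/describe mismatch above (pods reported non-Running by the field-selector query, then NotFound moments later) is routine while the cluster settles after a pause/restart. A sketch of the same two steps via client-go, tolerating NotFound instead of failing; kubeconfig location is an assumption, the field selector mirrors the kubectl call in the log:

	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Re-fetch each pod; during a restart it may be gone already.
			_, err := cs.CoreV1().Pods(p.Namespace).Get(ctx, p.Name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				fmt.Printf("%s/%s vanished between list and get\n", p.Namespace, p.Name)
				continue
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
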
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
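
The proxy snapshot is plain environment inspection; Go's net/http resolves the same variables, so an all-"<empty>" snapshot means requests such as the registry probe later in this log go direct. A tiny sketch (the target URL is illustrative):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// http.ProxyFromEnvironment consults HTTP_PROXY/HTTPS_PROXY/NO_PROXY,
		// the same variables the harness snapshots above.
		req, err := http.NewRequest("GET", "https://registry.k8s.io/", nil)
		if err != nil {
			panic(err)
		}
		proxyURL, err := http.ProxyFromEnvironment(req)
		if err != nil {
			panic(err)
		}
		fmt.Println("proxy:", proxyURL) // <nil> when all three are empty
	}
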
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-810872
helpers_test.go:243: (dbg) docker inspect newest-cni-810872:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	        "Created": "2025-10-26T15:19:50.863323675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 919089,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:20:35.225485723Z",
	            "FinishedAt": "2025-10-26T15:20:34.336689855Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/hosts",
	        "LogPath": "/var/lib/docker/containers/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1-json.log",
	        "Name": "/newest-cni-810872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-810872:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-810872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1",
	                "LowerDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd7ae82453e52662053e8888e322141529a6ea56f5351a3455777c5505ff92fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-810872",
	                "Source": "/var/lib/docker/volumes/newest-cni-810872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-810872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-810872",
	                "name.minikube.sigs.k8s.io": "newest-cni-810872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3434d43bb58d98d1091f600bfd633002a91aee9ff8b266efa0fde0d05a3085d",
	            "SandboxKey": "/var/run/docker/netns/e3434d43bb58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-810872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:d5:4c:bc:cd:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd72f372b9d59036c2bf74ba038a42769a6a6fe23c0e4f9a4a483ae08bcd16c7",
	                    "EndpointID": "6aed6e4a2a5370a45baef4a05fcb3eb89fe7211ec43414f3069ae96f2d29e4d7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-810872",
	                        "fcebd0173001"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
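
One detail worth reading out of the inspect dump: HostConfig.PortBindings was created with empty HostPort values, while the concrete 33862-33866 mappings only appear under NetworkSettings.Ports once the container is running, which is where minikube (and any other client) must look them up. A sketch with the Docker Engine Go client, assuming the standard DOCKER_HOST environment:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()
		info, err := cli.ContainerInspect(context.Background(), "newest-cni-810872")
		if err != nil {
			panic(err)
		}
		// HostConfig.PortBindings held empty HostPorts at create time; the
		// dynamically assigned host ports only show up here after start.
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
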
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872: exit status 2 (370.965141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
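
As the "(may be ok)" notes indicate, `minikube status` conveys state through its exit code: here exit 2 accompanies a Running host while paused components are not, so a wrapper has to branch on the code rather than on a bare non-nil error. A sketch using os/exec; assuming a `minikube` binary on PATH (the harness invokes out/minikube-linux-arm64) and the profile name from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", "newest-cni-810872").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit 2 in this log meant "host Running, some component not";
			// the harness records it as "(may be ok)" instead of aborting.
			fmt.Printf("status %q, exit code %d\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("status %q\n", out)
	}
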
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-810872 logs -n 25: (1.139675345s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:16 UTC │ 26 Oct 25 15:18 UTC │
	│ image   │ embed-certs-018497 image list --format=json                                                                                                                                                                                                   │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ pause   │ -p embed-certs-018497 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:18 UTC │
	│ delete  │ -p embed-certs-018497                                                                                                                                                                                                                         │ embed-certs-018497           │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                                                                                                    │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ stop    │ -p newest-cni-810872 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-810872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ image   │ newest-cni-810872 image list --format=json                                                                                                                                                                                                    │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p newest-cni-810872 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:20:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:20:34.926623  918963 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:20:34.926793  918963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:34.926803  918963 out.go:374] Setting ErrFile to fd 2...
	I1026 15:20:34.926808  918963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:34.927094  918963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:20:34.927507  918963 out.go:368] Setting JSON to false
	I1026 15:20:34.928539  918963 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18187,"bootTime":1761473848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:20:34.928607  918963 start.go:141] virtualization:  
	I1026 15:20:34.931818  918963 out.go:179] * [newest-cni-810872] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:20:34.935779  918963 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:20:34.935867  918963 notify.go:220] Checking for updates...
	I1026 15:20:34.941616  918963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:20:34.944563  918963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:34.947382  918963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:20:34.950248  918963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:20:34.953163  918963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:20:34.956431  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:34.957048  918963 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:20:34.990266  918963 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:20:34.990382  918963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:35.055227  918963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:35.045259475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:35.055344  918963 docker.go:318] overlay module found
	I1026 15:20:35.058562  918963 out.go:179] * Using the docker driver based on existing profile
	I1026 15:20:35.061565  918963 start.go:305] selected driver: docker
	I1026 15:20:35.061585  918963 start.go:925] validating driver "docker" against &{Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:35.061695  918963 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:20:35.062453  918963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:35.126047  918963 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:35.116428846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:35.126416  918963 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:20:35.126454  918963 cni.go:84] Creating CNI manager for ""
	I1026 15:20:35.126520  918963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:35.126562  918963 start.go:349] cluster config:
	{Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:35.129822  918963 out.go:179] * Starting "newest-cni-810872" primary control-plane node in "newest-cni-810872" cluster
	I1026 15:20:35.132792  918963 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:20:35.135839  918963 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:20:35.138858  918963 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:35.138947  918963 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:20:35.138962  918963 cache.go:58] Caching tarball of preloaded images
	I1026 15:20:35.138971  918963 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:20:35.139059  918963 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:20:35.139069  918963 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:20:35.139186  918963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/config.json ...
	I1026 15:20:35.161094  918963 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:20:35.161120  918963 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:20:35.161142  918963 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:20:35.161170  918963 start.go:360] acquireMachinesLock for newest-cni-810872: {Name:mk50aa66027ddaa44fbf43aa11b8f9f4974507d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:20:35.161244  918963 start.go:364] duration metric: took 44.964µs to acquireMachinesLock for "newest-cni-810872"
	I1026 15:20:35.161269  918963 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:20:35.161280  918963 fix.go:54] fixHost starting: 
	I1026 15:20:35.161539  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:35.183475  918963 fix.go:112] recreateIfNeeded on newest-cni-810872: state=Stopped err=<nil>
	W1026 15:20:35.183512  918963 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 15:20:33.614775  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:35.614939  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:35.186775  918963 out.go:252] * Restarting existing docker container for "newest-cni-810872" ...
	I1026 15:20:35.186925  918963 cli_runner.go:164] Run: docker start newest-cni-810872
	I1026 15:20:35.467929  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:35.493535  918963 kic.go:430] container "newest-cni-810872" state is running.
	I1026 15:20:35.493927  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:35.520832  918963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/config.json ...
	I1026 15:20:35.521104  918963 machine.go:93] provisionDockerMachine start ...
	I1026 15:20:35.521172  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:35.549170  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:35.549502  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:35.549511  918963 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:20:35.550865  918963 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 15:20:38.708328  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:20:38.708359  918963 ubuntu.go:182] provisioning hostname "newest-cni-810872"
	I1026 15:20:38.708424  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:38.726493  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:38.726799  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:38.726815  918963 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-810872 && echo "newest-cni-810872" | sudo tee /etc/hostname
	I1026 15:20:38.886632  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-810872
	
	I1026 15:20:38.886719  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:38.904029  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:38.904344  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:38.904367  918963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-810872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-810872/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-810872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:20:39.057140  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:20:39.057166  918963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-713593/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-713593/.minikube}
	I1026 15:20:39.057187  918963 ubuntu.go:190] setting up certificates
	I1026 15:20:39.057198  918963 provision.go:84] configureAuth start
	I1026 15:20:39.057258  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:39.075167  918963 provision.go:143] copyHostCerts
	I1026 15:20:39.075253  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem, removing ...
	I1026 15:20:39.075276  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem
	I1026 15:20:39.075352  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/ca.pem (1082 bytes)
	I1026 15:20:39.075454  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem, removing ...
	I1026 15:20:39.075463  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem
	I1026 15:20:39.075491  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/cert.pem (1123 bytes)
	I1026 15:20:39.075554  918963 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem, removing ...
	I1026 15:20:39.075564  918963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem
	I1026 15:20:39.075589  918963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-713593/.minikube/key.pem (1675 bytes)
	I1026 15:20:39.075642  918963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem org=jenkins.newest-cni-810872 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-810872]
	I1026 15:20:39.628232  918963 provision.go:177] copyRemoteCerts
	I1026 15:20:39.628299  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:20:39.628350  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:39.646158  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:39.756782  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:20:39.774971  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:20:39.793209  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:20:39.811355  918963 provision.go:87] duration metric: took 754.133031ms to configureAuth
	I1026 15:20:39.811424  918963 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:20:39.811640  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:39.811758  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:39.829752  918963 main.go:141] libmachine: Using SSH client type: native
	I1026 15:20:39.830076  918963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef220] 0x3f19e0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:20:39.830098  918963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1026 15:20:38.115576  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:40.616027  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:40.150463  918963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:20:40.150551  918963 machine.go:96] duration metric: took 4.629436671s to provisionDockerMachine
	I1026 15:20:40.150578  918963 start.go:293] postStartSetup for "newest-cni-810872" (driver="docker")
	I1026 15:20:40.150614  918963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:20:40.150728  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:20:40.150792  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.169796  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.281517  918963 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:20:40.285327  918963 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:20:40.285368  918963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:20:40.285381  918963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/addons for local assets ...
	I1026 15:20:40.285436  918963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-713593/.minikube/files for local assets ...
	I1026 15:20:40.285517  918963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem -> 7154402.pem in /etc/ssl/certs
	I1026 15:20:40.285635  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:20:40.293254  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:20:40.314726  918963 start.go:296] duration metric: took 164.10667ms for postStartSetup
	I1026 15:20:40.314879  918963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:20:40.314965  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.334347  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.434150  918963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:20:40.439110  918963 fix.go:56] duration metric: took 5.277822656s for fixHost
	I1026 15:20:40.439187  918963 start.go:83] releasing machines lock for "newest-cni-810872", held for 5.27792966s
	I1026 15:20:40.439290  918963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-810872
	I1026 15:20:40.457059  918963 ssh_runner.go:195] Run: cat /version.json
	I1026 15:20:40.457097  918963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:20:40.457113  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.457160  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:40.480637  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.487259  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:40.672813  918963 ssh_runner.go:195] Run: systemctl --version
	I1026 15:20:40.679861  918963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:20:40.724268  918963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:20:40.729370  918963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:20:40.729452  918963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:20:40.737962  918963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:20:40.737991  918963 start.go:495] detecting cgroup driver to use...
	I1026 15:20:40.738023  918963 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 15:20:40.738085  918963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:20:40.753124  918963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:20:40.766521  918963 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:20:40.766622  918963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:20:40.783072  918963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:20:40.796322  918963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:20:40.921668  918963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:20:41.044939  918963 docker.go:234] disabling docker service ...
	I1026 15:20:41.045066  918963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:20:41.061518  918963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:20:41.080891  918963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:20:41.221239  918963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:20:41.354910  918963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:20:41.368477  918963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:20:41.383031  918963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:20:41.383144  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.392981  918963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:20:41.393105  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.402728  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.413420  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.430310  918963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:20:41.439250  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.448879  918963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:20:41.457886  918963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
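	
	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch; exact ordering and the surrounding sections depend on the shipped config file being rewritten):
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	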
	I1026 15:20:41.467199  918963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:20:41.475226  918963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:20:41.483106  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:41.617484  918963 ssh_runner.go:195] Run: sudo systemctl restart crio
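	
	This is the standard systemd sequence after editing drop-in configuration: reload unit definitions, then restart the service. Done by hand, the same steps plus a verification would look like:
	
	    $ sudo systemctl daemon-reload
	    $ sudo systemctl restart crio
	    $ systemctl is-active crio
	    active
	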
	I1026 15:20:41.777979  918963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:20:41.778049  918963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:20:41.782500  918963 start.go:563] Will wait 60s for crictl version
	I1026 15:20:41.782646  918963 ssh_runner.go:195] Run: which crictl
	I1026 15:20:41.786597  918963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:20:41.810990  918963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:20:41.811209  918963 ssh_runner.go:195] Run: crio --version
	I1026 15:20:41.841677  918963 ssh_runner.go:195] Run: crio --version
	I1026 15:20:41.892090  918963 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:20:41.895033  918963 cli_runner.go:164] Run: docker network inspect newest-cni-810872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:20:41.913182  918963 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:20:41.917095  918963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:20:41.929946  918963 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:20:41.932895  918963 kubeadm.go:883] updating cluster {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:20:41.933046  918963 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:41.933151  918963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:20:41.967656  918963 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:20:41.967684  918963 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:20:41.967740  918963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:20:41.998615  918963 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:20:41.998643  918963 cache_images.go:85] Images are preloaded, skipping loading
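	
	The preload verification above parses `sudo crictl images --output json`; the same inventory can be inspected interactively with the plain table form:
	
	    $ sudo crictl images
	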
	I1026 15:20:41.998650  918963 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 15:20:41.998748  918963 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-810872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
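	
	The ExecStart override above is what gets installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 367-byte scp below). On the node, the effective unit plus all of its drop-ins can be reviewed with:
	
	    $ systemctl cat kubelet
	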
	I1026 15:20:41.998842  918963 ssh_runner.go:195] Run: crio config
	I1026 15:20:42.073705  918963 cni.go:84] Creating CNI manager for ""
	I1026 15:20:42.073742  918963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:42.073771  918963 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:20:42.073797  918963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-810872 NodeName:newest-cni-810872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:20:42.073948  918963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-810872"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
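	
	The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2212-byte scp below). As a hedged sketch, the same file could drive a manual bootstrap on a fresh node, though minikube itself invokes kubeadm with additional options rather than this bare command:
	
	    $ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
	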
	
	I1026 15:20:42.074032  918963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:20:42.084550  918963 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:20:42.084646  918963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:20:42.094942  918963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 15:20:42.116341  918963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:20:42.140776  918963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1026 15:20:42.158422  918963 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:20:42.163459  918963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
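	
	The bash one-liner above is a rewrite-and-replace idiom: `grep -v` drops any stale control-plane.minikube.internal entry from /etc/hosts, `echo` appends the fresh mapping, and `sudo cp` installs the temp file over the original. The result can be confirmed with:
	
	    $ grep control-plane.minikube.internal /etc/hosts
	    192.168.85.2	control-plane.minikube.internal
	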
	I1026 15:20:42.177517  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:42.316525  918963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:42.337496  918963 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872 for IP: 192.168.85.2
	I1026 15:20:42.337587  918963 certs.go:195] generating shared ca certs ...
	I1026 15:20:42.337619  918963 certs.go:227] acquiring lock for ca certs: {Name:mk92448c09b1569d1cb5de3970c66a9788fa5fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:42.337818  918963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key
	I1026 15:20:42.337887  918963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key
	I1026 15:20:42.337909  918963 certs.go:257] generating profile certs ...
	I1026 15:20:42.338053  918963 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/client.key
	I1026 15:20:42.338167  918963 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key.4ba50940
	I1026 15:20:42.338262  918963 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key
	I1026 15:20:42.338416  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem (1338 bytes)
	W1026 15:20:42.338469  918963 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440_empty.pem, impossibly tiny 0 bytes
	I1026 15:20:42.338492  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:20:42.338564  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:20:42.338616  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:20:42.338673  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/certs/key.pem (1675 bytes)
	I1026 15:20:42.338770  918963 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem (1708 bytes)
	I1026 15:20:42.339402  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:20:42.368485  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 15:20:42.388324  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:20:42.409155  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1026 15:20:42.441639  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:20:42.460800  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:20:42.491509  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:20:42.517765  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/newest-cni-810872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:20:42.538370  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/certs/715440.pem --> /usr/share/ca-certificates/715440.pem (1338 bytes)
	I1026 15:20:42.560901  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/ssl/certs/7154402.pem --> /usr/share/ca-certificates/7154402.pem (1708 bytes)
	I1026 15:20:42.583257  918963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:20:42.604334  918963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:20:42.628896  918963 ssh_runner.go:195] Run: openssl version
	I1026 15:20:42.644505  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7154402.pem && ln -fs /usr/share/ca-certificates/7154402.pem /etc/ssl/certs/7154402.pem"
	I1026 15:20:42.654305  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.659564  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:22 /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.659645  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7154402.pem
	I1026 15:20:42.707754  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7154402.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:20:42.716310  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:20:42.725333  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.729425  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.729541  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:20:42.772893  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:20:42.782012  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715440.pem && ln -fs /usr/share/ca-certificates/715440.pem /etc/ssl/certs/715440.pem"
	I1026 15:20:42.791031  918963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.795689  918963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:22 /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.795797  918963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715440.pem
	I1026 15:20:42.837604  918963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715440.pem /etc/ssl/certs/51391683.0"
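	
	Each `openssl x509 -hash -noout` run above computes the subject hash that OpenSSL uses to name CA links under /etc/ssl/certs, which is why every hash step is immediately followed by a symlink under that hash (b5213941.0 for minikubeCA.pem, and so on):
	
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	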
	I1026 15:20:42.846188  918963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:20:42.850830  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:20:42.893843  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:20:42.936160  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:20:42.978400  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:20:43.024991  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:20:43.071276  918963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
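	
	The `-checkend 86400` runs above exit non-zero if the certificate expires within 86400 seconds, so a clean pass means every control-plane cert is valid for at least another 24 hours. Standalone:
	
	    $ openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	        && echo "valid for 24h+" || echo "expiring soon"
	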
	I1026 15:20:43.120536  918963 kubeadm.go:400] StartCluster: {Name:newest-cni-810872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-810872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:43.120736  918963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:20:43.120833  918963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:20:43.187827  918963 cri.go:89] found id: "3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593"
	I1026 15:20:43.187901  918963 cri.go:89] found id: ""
	I1026 15:20:43.188147  918963 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:20:43.218705  918963 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:20:43Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:20:43.218792  918963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:20:43.238786  918963 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:20:43.238856  918963 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:20:43.238948  918963 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:20:43.253154  918963 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:20:43.253896  918963 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-810872" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:43.254273  918963 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-713593/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-810872" cluster setting kubeconfig missing "newest-cni-810872" context setting]
	I1026 15:20:43.254878  918963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.256646  918963 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:20:43.290582  918963 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 15:20:43.290668  918963 kubeadm.go:601] duration metric: took 51.792921ms to restartPrimaryControlPlane
	I1026 15:20:43.290691  918963 kubeadm.go:402] duration metric: took 170.164632ms to StartCluster
	I1026 15:20:43.290735  918963 settings.go:142] acquiring lock: {Name:mk953771596c5d2e89654d746554c60ae4ecbff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.290832  918963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:43.292012  918963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/kubeconfig: {Name:mkaf5a999492296588af7af23a8b5cb694313a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:43.292347  918963 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:43.293026  918963 config.go:182] Loaded profile config "newest-cni-810872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:43.293175  918963 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:20:43.293270  918963 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-810872"
	I1026 15:20:43.293284  918963 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-810872"
	W1026 15:20:43.293291  918963 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:20:43.293315  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.293865  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.294087  918963 addons.go:69] Setting dashboard=true in profile "newest-cni-810872"
	I1026 15:20:43.294119  918963 addons.go:238] Setting addon dashboard=true in "newest-cni-810872"
	W1026 15:20:43.294139  918963 addons.go:247] addon dashboard should already be in state true
	I1026 15:20:43.294190  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.294670  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.298962  918963 addons.go:69] Setting default-storageclass=true in profile "newest-cni-810872"
	I1026 15:20:43.299365  918963 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-810872"
	I1026 15:20:43.300710  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.302414  918963 out.go:179] * Verifying Kubernetes components...
	I1026 15:20:43.306386  918963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:20:43.357333  918963 addons.go:238] Setting addon default-storageclass=true in "newest-cni-810872"
	W1026 15:20:43.357358  918963 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:20:43.357383  918963 host.go:66] Checking if "newest-cni-810872" exists ...
	I1026 15:20:43.357807  918963 cli_runner.go:164] Run: docker container inspect newest-cni-810872 --format={{.State.Status}}
	I1026 15:20:43.364784  918963 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:20:43.364870  918963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:20:43.367994  918963 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:43.368019  918963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:20:43.368089  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.371311  918963 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:20:43.374313  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:20:43.374339  918963 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:20:43.374417  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.408875  918963 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:43.408912  918963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:20:43.408984  918963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-810872
	I1026 15:20:43.439491  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.449057  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.460641  918963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/newest-cni-810872/id_rsa Username:docker}
	I1026 15:20:43.699680  918963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:20:43.732041  918963 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:20:43.732187  918963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:20:43.732648  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:20:43.762271  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:20:43.797164  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:20:43.797190  918963 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:20:43.801790  918963 api_server.go:72] duration metric: took 509.352121ms to wait for apiserver process to appear ...
	I1026 15:20:43.801819  918963 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:20:43.801838  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:43.802161  918963 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 15:20:43.834566  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:20:43.834592  918963 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:20:43.866118  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:20:43.866143  918963 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:20:43.943386  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:20:43.943409  918963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:20:44.022620  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:20:44.022647  918963 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:20:44.059187  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:20:44.059218  918963 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:20:44.086734  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:20:44.086802  918963 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:20:44.119973  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:20:44.120044  918963 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:20:44.141237  918963 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:44.141311  918963 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:20:44.163421  918963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:20:44.302743  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1026 15:20:43.114776  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:45.115214  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	W1026 15:20:47.115444  914177 pod_ready.go:104] pod "coredns-66bc5c9577-zm8vb" is not "Ready", error: <nil>
	I1026 15:20:47.616443  914177 pod_ready.go:94] pod "coredns-66bc5c9577-zm8vb" is "Ready"
	I1026 15:20:47.616510  914177 pod_ready.go:86] duration metric: took 36.007779577s for pod "coredns-66bc5c9577-zm8vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.619627  914177 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.626283  914177 pod_ready.go:94] pod "etcd-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.626364  914177 pod_ready.go:86] duration metric: took 6.655392ms for pod "etcd-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.629521  914177 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.640804  914177 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.640889  914177 pod_ready.go:86] duration metric: took 11.292956ms for pod "kube-apiserver-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.643685  914177 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:47.812158  914177 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:47.812239  914177 pod_ready.go:86] duration metric: took 168.478952ms for pod "kube-controller-manager-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.013331  914177 pod_ready.go:83] waiting for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.412730  914177 pod_ready.go:94] pod "kube-proxy-nbcd6" is "Ready"
	I1026 15:20:48.412806  914177 pod_ready.go:86] duration metric: took 399.388552ms for pod "kube-proxy-nbcd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:48.612713  914177 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:49.012887  914177 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-494684" is "Ready"
	I1026 15:20:49.012977  914177 pod_ready.go:86] duration metric: took 400.197127ms for pod "kube-scheduler-default-k8s-diff-port-494684" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:20:49.013005  914177 pod_ready.go:40] duration metric: took 37.416377925s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:20:49.098321  914177 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:20:49.101428  914177 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-494684" cluster and "default" namespace by default
	I1026 15:20:48.470198  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:20:48.470234  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:20:48.470249  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:48.648395  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:20:48.648421  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:20:48.802721  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:48.863367  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:48.863451  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:20:49.301948  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:49.335987  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:49.336022  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:20:49.802177  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:49.820487  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:20:49.820520  918963 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:20:50.145693  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.412991501s)
	I1026 15:20:50.145851  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.982332878s)
	I1026 15:20:50.146048  918963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.383403583s)
	I1026 15:20:50.149001  918963 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-810872 addons enable metrics-server
	
	I1026 15:20:50.176755  918963 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:20:50.179730  918963 addons.go:514] duration metric: took 6.886538561s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:20:50.302222  918963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:20:50.311560  918963 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:20:50.312896  918963 api_server.go:141] control plane version: v1.34.1
	I1026 15:20:50.312929  918963 api_server.go:131] duration metric: took 6.511103592s to wait for apiserver health ...
	I1026 15:20:50.312939  918963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:20:50.317004  918963 system_pods.go:59] 8 kube-system pods found
	I1026 15:20:50.317041  918963 system_pods.go:61] "coredns-66bc5c9577-b49d6" [0cc1ad2e-be8a-43fb-baed-3d411550f34c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:50.317050  918963 system_pods.go:61] "etcd-newest-cni-810872" [784475d8-6ee3-45c9-a0cc-55d18ee84177] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:20:50.317059  918963 system_pods.go:61] "kindnet-ggnvk" [52fc9b6a-4117-47b6-8fd4-eff144861784] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:20:50.317066  918963 system_pods.go:61] "kube-apiserver-newest-cni-810872" [cdd8bae8-4574-497b-a540-57831768a16b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:20:50.317073  918963 system_pods.go:61] "kube-controller-manager-newest-cni-810872" [96ea627b-92e4-448c-8621-2129603a8ce3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:20:50.317085  918963 system_pods.go:61] "kube-proxy-7rsbv" [d20c61cd-9231-44c6-9861-45cb1d45c060] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:20:50.317092  918963 system_pods.go:61] "kube-scheduler-newest-cni-810872" [17a3ef6c-201f-4fdb-b45f-6e3b2614a3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:20:50.317104  918963 system_pods.go:61] "storage-provisioner" [6a816eb1-59c8-4ed0-9087-4fb271f4608b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:20:50.317111  918963 system_pods.go:74] duration metric: took 4.139319ms to wait for pod list to return data ...
	I1026 15:20:50.317121  918963 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:20:50.320152  918963 default_sa.go:45] found service account: "default"
	I1026 15:20:50.320179  918963 default_sa.go:55] duration metric: took 3.051635ms for default service account to be created ...
	I1026 15:20:50.320191  918963 kubeadm.go:586] duration metric: took 7.027758275s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:20:50.320208  918963 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:20:50.322729  918963 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1026 15:20:50.322763  918963 node_conditions.go:123] node cpu capacity is 2
	I1026 15:20:50.322776  918963 node_conditions.go:105] duration metric: took 2.561563ms to run NodePressure ...
	I1026 15:20:50.322788  918963 start.go:241] waiting for startup goroutines ...
	I1026 15:20:50.322796  918963 start.go:246] waiting for cluster config update ...
	I1026 15:20:50.322811  918963 start.go:255] writing updated cluster config ...
	I1026 15:20:50.323113  918963 ssh_runner.go:195] Run: rm -f paused
	I1026 15:20:50.429104  918963 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1026 15:20:50.432795  918963 out.go:179] * Done! kubectl is now configured to use "newest-cni-810872" cluster and "default" namespace by default
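Note on the two Pending pods above: they are blocked by the node.kubernetes.io/not-ready taint rather than by resources, and the taint clears once the CNI plugin (kindnet) writes its config and the kubelet reports Ready. A minimal triage sketch, assuming the test's kubeconfig context is still available:

$ kubectl --context newest-cni-810872 get node newest-cni-810872 -o jsonpath='{.spec.taints}'
$ kubectl --context newest-cni-810872 -n kube-system get pods --field-selector=status.phase=Pending   # coredns + storage-provisioner while tainted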
	
	
	==> CRI-O <==
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.339722633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.348506795Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=efae6dd2-a857-4977-bf33-19436ede827a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.35665223Z" level=info msg="Ran pod sandbox 4745de2c92ffa3b1745379d9a9bc0cf40b932f74be71ee063971dc2cb00248fd with infra container: kube-system/kube-proxy-7rsbv/POD" id=efae6dd2-a857-4977-bf33-19436ede827a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.358870978Z" level=info msg="Running pod sandbox: kube-system/kindnet-ggnvk/POD" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.358940854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.362335763Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.368026361Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b9a47c40-6d05-4a0b-bed1-f116f1200693 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.371104212Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=33d3c164-1477-42d5-b965-e21c15b29636 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.374168155Z" level=info msg="Creating container: kube-system/kube-proxy-7rsbv/kube-proxy" id=f098c8e8-c8fc-411c-ae8e-339dda211188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.3742966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.37746255Z" level=info msg="Ran pod sandbox 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38 with infra container: kube-system/kindnet-ggnvk/POD" id=f39166a8-a274-48c2-885f-5d890f0a7305 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.382954221Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=af656654-2125-4caf-8346-6f4ea8fa4c99 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.388079052Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0f371136-1a36-446d-88cc-5eff879c6b92 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.389912787Z" level=info msg="Creating container: kube-system/kindnet-ggnvk/kindnet-cni" id=290ebf79-dbed-497f-a74c-7c38e961bea3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.390039033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.39166822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.394396496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.395320387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.396549069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.453729323Z" level=info msg="Created container 088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3: kube-system/kube-proxy-7rsbv/kube-proxy" id=f098c8e8-c8fc-411c-ae8e-339dda211188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.456508184Z" level=info msg="Created container 9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5: kube-system/kindnet-ggnvk/kindnet-cni" id=290ebf79-dbed-497f-a74c-7c38e961bea3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.475453048Z" level=info msg="Starting container: 9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5" id=9cabdf3c-5922-4b90-82b2-3791c1cb6914 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.478425183Z" level=info msg="Starting container: 088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3" id=1d5dc104-7d02-4f63-8419-4ceaffb8208b name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.486061747Z" level=info msg="Started container" PID=1061 containerID=9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5 description=kube-system/kindnet-ggnvk/kindnet-cni id=9cabdf3c-5922-4b90-82b2-3791c1cb6914 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38
	Oct 26 15:20:50 newest-cni-810872 crio[609]: time="2025-10-26T15:20:50.493013799Z" level=info msg="Started container" PID=1060 containerID=088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3 description=kube-system/kube-proxy-7rsbv/kube-proxy id=1d5dc104-7d02-4f63-8419-4ceaffb8208b name=/runtime.v1.RuntimeService/StartContainer sandboxID=4745de2c92ffa3b1745379d9a9bc0cf40b932f74be71ee063971dc2cb00248fd
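The CRI-O entries above trace the normal CRI call sequence per pod: RunPodSandbox creates the infra (pause) container, then CreateContainer and StartContainer bring up the workload inside it. To inspect the same objects from the node, a sketch (crictl ships in the kicbase image, so this should work, but it is not part of the harness):

$ minikube -p newest-cni-810872 ssh -- sudo crictl pods    # sandboxes 4745de2c... and 66b54d3e...
$ minikube -p newest-cni-810872 ssh -- sudo crictl ps -a   # containers created inside them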
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9f5cdb4d4577f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   66b54d3edb981       kindnet-ggnvk                               kube-system
	088e65f93c8fc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   4745de2c92ffa       kube-proxy-7rsbv                            kube-system
	c3c8354421959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   ccb1cd67f0a05       etcd-newest-cni-810872                      kube-system
	3f0eac97cebef       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   5726231d02b21       kube-controller-manager-newest-cni-810872   kube-system
	629f275ce664c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   32b83e56aea85       kube-apiserver-newest-cni-810872            kube-system
	4b83b18a31554       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   95694fdd3371f       kube-scheduler-newest-cni-810872            kube-system
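The IMAGE column shows bare image IDs, and ATTEMPT 1 marks these as the first re-creations after the stop/start cycle driven by the Pause test. Mapping the IDs back to repository tags is possible on the node, as a sketch:

$ minikube -p newest-cni-810872 ssh -- sudo crictl images --digests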
	
	
	==> describe nodes <==
	Name:               newest-cni-810872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-810872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-810872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_20_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-810872
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:20:48 +0000   Sun, 26 Oct 2025 15:20:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-810872
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cb876d54-b19f-49ca-b5c7-700f084fb6f3
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-810872                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-ggnvk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-810872             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-810872    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-7rsbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-810872             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-810872 event: Registered Node newest-cni-810872 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-810872 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-810872 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-810872 event: Registered Node newest-cni-810872 in Controller
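The Ready=False condition above pins the failure to the network layer: the kubelet stays NotReady until some CNI plugin drops a config file into /etc/cni/net.d. Two quick checks on the node, as a sketch:

$ minikube -p newest-cni-810872 ssh -- ls /etc/cni/net.d   # empty until kindnet writes its config
$ minikube -p newest-cni-810872 ssh -- sudo journalctl -u kubelet --no-pager | grep -i networkready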
	
	
	==> dmesg <==
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	[Oct26 15:20] overlayfs: idmapped layers are currently not supported
	[  +9.178014] overlayfs: idmapped layers are currently not supported
	[ +33.140474] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c3c835442195947feaa5c9643bf06f25c54f4301cb28669c53826faac0cd7145] <==
	{"level":"warn","ts":"2025-10-26T15:20:47.112605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.133845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.160121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.177165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.209973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.217116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.235270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.282573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.294176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.310426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.332953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.347123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.381873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.425512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.440785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.457558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.483175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.494296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.516852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.530641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.552224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.572501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.600867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.629931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:47.726076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:20:56 up  5:03,  0 user,  load average: 4.17, 3.86, 3.24
	Linux newest-cni-810872 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f5cdb4d4577f451972e1470e0f15f104ddff55552c9299dad33e2f6eb1e63c5] <==
	I1026 15:20:50.630824       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:20:50.631373       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:20:50.631530       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:20:50.631572       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:20:50.631612       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:20:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:20:50.850499       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:20:50.857300       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:20:50.857410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:20:50.857972       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
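kindnet itself starts and syncs its informers; only its optional NRI plugin exits, because CRI-O exposes no /var/run/nri/nri.sock, so NRI-based network-policy hooks are skipped while the core CNI path keeps working. A sketch for confirming NRI is disabled (treat the config key as an assumption about this CRI-O build):

$ minikube -p newest-cni-810872 ssh -- ls /var/run/nri/   # expected to fail, matching the log
$ minikube -p newest-cni-810872 ssh -- sudo crio config 2>/dev/null | grep -i -A2 nri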
	
	
	==> kube-apiserver [629f275ce664cc35af8b347d8b11bb813d2dc6e37a24629561382ad36edfce32] <==
	I1026 15:20:48.844878       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:20:48.844952       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:20:48.850038       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:20:48.852839       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:20:48.853510       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:20:48.853591       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:20:48.866215       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:20:48.866326       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:20:48.866477       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:20:48.866492       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:20:48.866499       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:20:48.866505       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:20:48.929508       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:20:49.370727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:20:49.598373       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:20:49.750223       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:20:49.797502       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:20:49.826607       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:20:49.858590       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:20:49.955720       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.228.232"}
	I1026 15:20:49.975836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.223.157"}
	I1026 15:20:52.129503       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:20:52.434663       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:20:52.579591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:52.630811       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
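The allocator lines record the dashboard Services getting their ClusterIPs back as the restarted apiserver replays state; the "quota admission added evaluator" lines are routine registrations, not errors. Cross-checking the allocations, as a sketch:

$ kubectl --context newest-cni-810872 -n kubernetes-dashboard get svc -o wide   # expect 10.104.228.232 and 10.98.223.157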
	
	
	==> kube-controller-manager [3f0eac97cebef7ddd856aff1f6018540cceb41ed2fdde98ef1034f198c6fa593] <==
	I1026 15:20:52.028938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:52.029464       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:20:52.030902       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:20:52.032150       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:20:52.036053       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:52.039227       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:20:52.042448       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:20:52.050047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:20:52.057398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:52.062545       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:20:52.066948       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:20:52.070447       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:20:52.072831       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:20:52.073074       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:20:52.073536       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:20:52.073699       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:20:52.076173       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:20:52.079761       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:52.079792       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:20:52.079800       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:20:52.085385       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:20:52.085483       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:20:52.085571       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-810872"
	I1026 15:20:52.085621       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:20:52.090238       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
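"Entering master disruption mode" means the node-lifecycle controller sees every node NotReady and therefore suspends taint-based evictions until at least one node reports Ready again. Watching the transition, as a sketch:

$ kubectl --context newest-cni-810872 get node newest-cni-810872 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'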
	
	
	==> kube-proxy [088e65f93c8fc255e6f63128c5b50a802f71f0e8c9b6d3e6c529b310d54936a3] <==
	I1026 15:20:50.590943       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:20:50.804274       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:20:50.908945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:20:50.909066       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:20:50.909196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:20:50.984038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:20:50.984099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:20:50.989776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:20:50.990254       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:20:50.990281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:50.992498       1 config.go:200] "Starting service config controller"
	I1026 15:20:50.992515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:20:50.992533       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:20:50.992537       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:20:50.992548       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:20:50.992552       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:20:50.998433       1 config.go:309] "Starting node config controller"
	I1026 15:20:50.998461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:20:50.998469       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:20:51.092943       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:20:51.092983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:20:51.093030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
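kube-proxy starts cleanly; the only warning is about nodePortAddresses being unset, which means NodePort connections are accepted on every local IP. In a kubeadm-style cluster the setting lives in the kube-proxy ConfigMap, as in this sketch (the "primary" value is the one the warning itself suggests):

$ kubectl --context newest-cni-810872 -n kube-system get cm kube-proxy -o yaml | grep -n nodePortAddresses
# to restrict: set nodePortAddresses: ["primary"] in config.conf and restart the kube-proxy DaemonSet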
	
	
	==> kube-scheduler [4b83b18a315545aca9f139c5b37a51f23e22004c6a5ceae83fddaa2f4eaa4492] <==
	I1026 15:20:46.004732       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:20:48.496105       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:20:48.496140       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:20:48.496151       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:20:48.496167       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:20:48.844558       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:20:48.844592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:48.856384       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:20:48.856476       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:48.856495       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:48.856510       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:20:48.957878       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
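The scheduler's authentication warnings are a startup-ordering artifact and clear once the apiserver is reachable; if they persisted, the fix the log itself names would look like this sketch for this cluster (the rolebinding name is an arbitrary choice, and --user replaces the service-account form because kube-scheduler authenticates as a user, not a ServiceAccount):

$ kubectl --context newest-cni-810872 -n kube-system create rolebinding scheduler-ext-auth-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler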
	
	
	==> kubelet <==
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715446     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-xtables-lock\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715465     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-cni-cfg\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.715482     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52fc9b6a-4117-47b6-8fd4-eff144861784-lib-modules\") pod \"kindnet-ggnvk\" (UID: \"52fc9b6a-4117-47b6-8fd4-eff144861784\") " pod="kube-system/kindnet-ggnvk"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.757521     724 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-newest-cni-810872\" is forbidden: User \"system:node:newest-cni-810872\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-810872' and this object" podUID="570a64448d1a0176d967fb314f521fba" pod="kube-system/kube-controller-manager-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.788919     724 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-newest-cni-810872\" is forbidden: User \"system:node:newest-cni-810872\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-810872' and this object" podUID="6570284b6b81aecad4d0356ab9d5ec89" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.910095     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-810872\" already exists" pod="kube-system/kube-controller-manager-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.910140     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956114     724 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956230     724 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.956274     724 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.957345     724 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: E1026 15:20:48.957594     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-810872\" already exists" pod="kube-system/kube-scheduler-newest-cni-810872"
	Oct 26 15:20:48 newest-cni-810872 kubelet[724]: I1026 15:20:48.957739     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.008491     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-810872\" already exists" pod="kube-system/etcd-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: I1026 15:20:49.008556     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.102330     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-810872\" already exists" pod="kube-system/kube-apiserver-newest-cni-810872"
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560782     724 projected.go:196] Error preparing data for projected volume kube-api-access-mf5qw for pod kube-system/kindnet-ggnvk: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560924     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52fc9b6a-4117-47b6-8fd4-eff144861784-kube-api-access-mf5qw podName:52fc9b6a-4117-47b6-8fd4-eff144861784 nodeName:}" failed. No retries permitted until 2025-10-26 15:20:50.060886294 +0000 UTC m=+7.723457779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mf5qw" (UniqueName: "kubernetes.io/projected/52fc9b6a-4117-47b6-8fd4-eff144861784-kube-api-access-mf5qw") pod "kindnet-ggnvk" (UID: "52fc9b6a-4117-47b6-8fd4-eff144861784") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.560988     724 projected.go:196] Error preparing data for projected volume kube-api-access-s7khq for pod kube-system/kube-proxy-7rsbv: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:49 newest-cni-810872 kubelet[724]: E1026 15:20:49.561046     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-api-access-s7khq podName:d20c61cd-9231-44c6-9861-45cb1d45c060 nodeName:}" failed. No retries permitted until 2025-10-26 15:20:50.061032478 +0000 UTC m=+7.723603865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s7khq" (UniqueName: "kubernetes.io/projected/d20c61cd-9231-44c6-9861-45cb1d45c060-kube-api-access-s7khq") pod "kube-proxy-7rsbv" (UID: "d20c61cd-9231-44c6-9861-45cb1d45c060") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-810872" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-810872' and this object
	Oct 26 15:20:50 newest-cni-810872 kubelet[724]: I1026 15:20:50.165642     724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 26 15:20:50 newest-cni-810872 kubelet[724]: W1026 15:20:50.372037     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fcebd01730016a6946708cc9bb5153470daacdda1609b0fac42f586e8b00e4c1/crio-66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38 WatchSource:0}: Error finding container 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38: Status 404 returned error can't find the container with id 66b54d3edb981e8a9c2987f61125d46990ed2054087b537382421f426ec51d38
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:20:51 newest-cni-810872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
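The closing systemd lines are the Pause test itself at work: minikube pause runs `systemctl disable --now kubelet` before freezing containers, so a cleanly stopped kubelet here is expected rather than a crash. Verifying from the host, as a sketch (assuming the node container is still reachable):

$ minikube -p newest-cni-810872 ssh -- sudo systemctl is-active kubelet   # prints "inactive" after a pause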
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-810872 -n newest-cni-810872
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-810872 -n newest-cni-810872: exit status 2 (377.957432ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-810872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb: exit status 1 (109.861881ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b49d6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hxld6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sbzbb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-810872 describe pod coredns-66bc5c9577-b49d6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb: exit status 1
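The NotFound errors above are a namespace artifact of the post-mortem helper rather than evidence the pods vanished: `kubectl describe pod` without -n searches only the default namespace, while these pods live in kube-system and kubernetes-dashboard. Re-running with explicit namespaces, as a sketch:

$ kubectl --context newest-cni-810872 -n kube-system describe pod coredns-66bc5c9577-b49d6 storage-provisioner
$ kubectl --context newest-cni-810872 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-hxld6 kubernetes-dashboard-855c9754f9-sbzbb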
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-494684 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-494684 --alsologtostderr -v=1: exit status 80 (2.320929596s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-494684 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:21:01.297404  922502 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:21:01.297758  922502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:21:01.297784  922502 out.go:374] Setting ErrFile to fd 2...
	I1026 15:21:01.297820  922502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:21:01.298323  922502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:21:01.301932  922502 out.go:368] Setting JSON to false
	I1026 15:21:01.301962  922502 mustload.go:65] Loading cluster: default-k8s-diff-port-494684
	I1026 15:21:01.302368  922502 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:21:01.302828  922502 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-494684 --format={{.State.Status}}
	I1026 15:21:01.330341  922502 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:21:01.330686  922502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:21:01.445597  922502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-26 15:21:01.429430649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:21:01.446266  922502 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-494684 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:21:01.449892  922502 out.go:179] * Pausing node default-k8s-diff-port-494684 ... 
	I1026 15:21:01.452963  922502 host.go:66] Checking if "default-k8s-diff-port-494684" exists ...
	I1026 15:21:01.453323  922502 ssh_runner.go:195] Run: systemctl --version
	I1026 15:21:01.453372  922502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-494684
	I1026 15:21:01.477224  922502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/default-k8s-diff-port-494684/id_rsa Username:docker}
	I1026 15:21:01.583764  922502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:21:01.603550  922502 pause.go:52] kubelet running: true
	I1026 15:21:01.603622  922502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:21:01.920972  922502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:21:01.921073  922502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:21:02.002981  922502 cri.go:89] found id: "424850f4f7a96508483b33e142e2921faab21e2cccc2ce09d8328764c50179f0"
	I1026 15:21:02.003006  922502 cri.go:89] found id: "9f16314e48b8fd0624cda906bc9d32caef8c5a24e782e7bfe524002f61e3eab3"
	I1026 15:21:02.003011  922502 cri.go:89] found id: "34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c"
	I1026 15:21:02.003014  922502 cri.go:89] found id: "013ec6f98c0140858003af5f3659553f790b05e213708d5857f92ea159423b1a"
	I1026 15:21:02.003017  922502 cri.go:89] found id: "a0180faaf0f1bee4ee7d363cbda8c2925f3a5fa8d74fe22adef91512ea23fb5a"
	I1026 15:21:02.003026  922502 cri.go:89] found id: "241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6"
	I1026 15:21:02.003037  922502 cri.go:89] found id: "7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b"
	I1026 15:21:02.003041  922502 cri.go:89] found id: "726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd"
	I1026 15:21:02.003044  922502 cri.go:89] found id: "76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc"
	I1026 15:21:02.003050  922502 cri.go:89] found id: "71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	I1026 15:21:02.003058  922502 cri.go:89] found id: "b36b45e3ea3a24ee46032be9e1b20ead00a7e68c7a9149c026a05817148a912a"
	I1026 15:21:02.003061  922502 cri.go:89] found id: ""
	I1026 15:21:02.003128  922502 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:21:02.017828  922502 retry.go:31] will retry after 324.53113ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:21:02Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:21:02.343415  922502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:21:02.357488  922502 pause.go:52] kubelet running: false
	I1026 15:21:02.357564  922502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:21:02.576157  922502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:21:02.576263  922502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:21:02.654803  922502 cri.go:89] found id: "424850f4f7a96508483b33e142e2921faab21e2cccc2ce09d8328764c50179f0"
	I1026 15:21:02.654830  922502 cri.go:89] found id: "9f16314e48b8fd0624cda906bc9d32caef8c5a24e782e7bfe524002f61e3eab3"
	I1026 15:21:02.654835  922502 cri.go:89] found id: "34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c"
	I1026 15:21:02.654839  922502 cri.go:89] found id: "013ec6f98c0140858003af5f3659553f790b05e213708d5857f92ea159423b1a"
	I1026 15:21:02.654843  922502 cri.go:89] found id: "a0180faaf0f1bee4ee7d363cbda8c2925f3a5fa8d74fe22adef91512ea23fb5a"
	I1026 15:21:02.654846  922502 cri.go:89] found id: "241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6"
	I1026 15:21:02.654850  922502 cri.go:89] found id: "7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b"
	I1026 15:21:02.654853  922502 cri.go:89] found id: "726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd"
	I1026 15:21:02.654857  922502 cri.go:89] found id: "76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc"
	I1026 15:21:02.654868  922502 cri.go:89] found id: "71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	I1026 15:21:02.654883  922502 cri.go:89] found id: "b36b45e3ea3a24ee46032be9e1b20ead00a7e68c7a9149c026a05817148a912a"
	I1026 15:21:02.654891  922502 cri.go:89] found id: ""
	I1026 15:21:02.654940  922502 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:21:02.667428  922502 retry.go:31] will retry after 423.713641ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:21:02Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:21:03.092227  922502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:21:03.107271  922502 pause.go:52] kubelet running: false
	I1026 15:21:03.107362  922502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:21:03.353550  922502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:21:03.353643  922502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:21:03.467452  922502 cri.go:89] found id: "424850f4f7a96508483b33e142e2921faab21e2cccc2ce09d8328764c50179f0"
	I1026 15:21:03.467480  922502 cri.go:89] found id: "9f16314e48b8fd0624cda906bc9d32caef8c5a24e782e7bfe524002f61e3eab3"
	I1026 15:21:03.467485  922502 cri.go:89] found id: "34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c"
	I1026 15:21:03.467489  922502 cri.go:89] found id: "013ec6f98c0140858003af5f3659553f790b05e213708d5857f92ea159423b1a"
	I1026 15:21:03.467492  922502 cri.go:89] found id: "a0180faaf0f1bee4ee7d363cbda8c2925f3a5fa8d74fe22adef91512ea23fb5a"
	I1026 15:21:03.467496  922502 cri.go:89] found id: "241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6"
	I1026 15:21:03.467499  922502 cri.go:89] found id: "7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b"
	I1026 15:21:03.467503  922502 cri.go:89] found id: "726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd"
	I1026 15:21:03.467506  922502 cri.go:89] found id: "76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc"
	I1026 15:21:03.467513  922502 cri.go:89] found id: "71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	I1026 15:21:03.467516  922502 cri.go:89] found id: "b36b45e3ea3a24ee46032be9e1b20ead00a7e68c7a9149c026a05817148a912a"
	I1026 15:21:03.467527  922502 cri.go:89] found id: ""
	I1026 15:21:03.467587  922502 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:21:03.494211  922502 out.go:203] 
	W1026 15:21:03.498586  922502 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:21:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:21:03.498614  922502 out.go:285] * 
	W1026 15:21:03.509888  922502 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:21:03.513865  922502 out.go:203] 

                                                
                                                
** /stderr **
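
The failure mode in the stderr above is mechanical: `sudo runc list -f json` exits 1 because /run/runc does not exist on this CRI-O node, minikube's retry helper backs off (324.53113ms, then 423.713641ms) and repeats the kubelet/crictl/runc sequence, and once the attempts are spent the pause aborts with GUEST_PAUSE. The following is a minimal Go sketch of that retry shape only; `listRunning`, the fixed backoff list, and the error strings are illustrative stand-ins, not minikube's actual retry.go API.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning mimics the step that fails above: ask runc for its
	// container list. On this CRI-O node /run/runc is absent, so runc
	// exits 1 before it can print any JSON.
	func listRunning() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("sudo runc list -f json: %w\nstderr: %s", err, out)
		}
		return nil
	}

	func main() {
		// Backoffs taken from the log above; the real helper presumably
		// randomizes them (illustrative assumption, not minikube's code).
		backoffs := []time.Duration{324 * time.Millisecond, 424 * time.Millisecond}
		var err error
		for attempt := 0; ; attempt++ {
			if err = listRunning(); err == nil {
				return
			}
			if attempt >= len(backoffs) {
				break // attempts exhausted; surface the error
			}
			fmt.Printf("will retry after %v: %v\n", backoffs[attempt], err)
			time.Sleep(backoffs[attempt])
		}
		fmt.Printf("Exiting due to GUEST_PAUSE: Pause: list running: %v\n", err)
	}

Because the error is deterministic (the directory never appears between attempts), every retry fails the same way, which is why the log shows identical stderr on each pass before the final exit status 80.
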
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-494684 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-494684
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-494684:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	        "Created": "2025-10-26T15:18:07.847117574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 914658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:19:51.679630293Z",
	            "FinishedAt": "2025-10-26T15:19:50.173508243Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hosts",
	        "LogPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5-json.log",
	        "Name": "/default-k8s-diff-port-494684",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-494684:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-494684",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	                "LowerDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-494684",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-494684/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-494684",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c906b02e8fd8bfa9eacbcae7e26dbef4030a4ac62750eff8dda539312a408e1",
	            "SandboxKey": "/var/run/docker/netns/8c906b02e8fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-494684": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:b8:44:e1:d3:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3a8cf1602f3f72d6a70a2be8fdd96fd095eb34b48ad075b2aa41a3d8b9118a52",
	                    "EndpointID": "6dedba145c61eb600863158e3bfbd6ebf46c1ab179b3fe6c6d1c272ad52fdf72",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-494684",
	                        "ff68c01604a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
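
The SSH client earlier in the log dialed 127.0.0.1:33857 because that is the host port Docker bound to the container's 22/tcp, as the NetworkSettings.Ports block above records. A small sketch, assuming only a local docker CLI on PATH, of reading that mapping back with the same Go template the harness passed to docker container inspect:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template cli_runner executed above: index into
		// .NetworkSettings.Ports["22/tcp"] and take the first binding.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"default-k8s-diff-port-494684").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above this prints 33857, the port
		// the sshutil client connected to.
		fmt.Println(strings.TrimSpace(string(out)))
	}
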
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684: exit status 2 (450.229542ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
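
minikube status signals cluster state through its exit code, so a non-zero exit can still carry usable output: here stdout says Running while the exit status of 2 flags that some other component is not, which is why the harness records it as "may be ok". A hedged Go sketch of tolerating that convention follows; the treat-exit-2-as-informational policy is the test harness's own, not a documented minikube guarantee.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-494684")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 still produced usable stdout ("Running" above),
			// so log it and continue instead of failing the check outright.
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		} else if err != nil {
			panic(err) // the binary itself could not be run
		}
		fmt.Println("host state:", strings.TrimSpace(string(out)))
	}
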
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25: (2.171503111s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                                                                                                    │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ stop    │ -p newest-cni-810872 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-810872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ image   │ newest-cni-810872 image list --format=json                                                                                                                                                                                                    │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p newest-cni-810872 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ delete  │ -p newest-cni-810872                                                                                                                                                                                                                          │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p newest-cni-810872                                                                                                                                                                                                                          │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p auto-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-337407                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ image   │ default-k8s-diff-port-494684 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p default-k8s-diff-port-494684 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:20:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:20:59.750189  922177 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:20:59.750340  922177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:59.750372  922177 out.go:374] Setting ErrFile to fd 2...
	I1026 15:20:59.750384  922177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:59.751220  922177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:20:59.751793  922177 out.go:368] Setting JSON to false
	I1026 15:20:59.752987  922177 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18212,"bootTime":1761473848,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:20:59.753090  922177 start.go:141] virtualization:  
	I1026 15:20:59.756796  922177 out.go:179] * [auto-337407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:20:59.760666  922177 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:20:59.760772  922177 notify.go:220] Checking for updates...
	I1026 15:20:59.766772  922177 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:20:59.769787  922177 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:59.772846  922177 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:20:59.775792  922177 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:20:59.778680  922177 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:20:59.782229  922177 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:59.782380  922177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:20:59.808378  922177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:20:59.808504  922177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:59.870149  922177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:59.861144199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:59.870257  922177 docker.go:318] overlay module found
	I1026 15:20:59.873355  922177 out.go:179] * Using the docker driver based on user configuration
	I1026 15:20:59.876262  922177 start.go:305] selected driver: docker
	I1026 15:20:59.876278  922177 start.go:925] validating driver "docker" against <nil>
	I1026 15:20:59.876291  922177 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:20:59.877078  922177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:59.935162  922177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:59.923936674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:59.935319  922177 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:20:59.935546  922177 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:20:59.938548  922177 out.go:179] * Using Docker driver with root privileges
	I1026 15:20:59.941477  922177 cni.go:84] Creating CNI manager for ""
	I1026 15:20:59.941551  922177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:59.941563  922177 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:20:59.941643  922177 start.go:349] cluster config:
	{Name:auto-337407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-337407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:59.944650  922177 out.go:179] * Starting "auto-337407" primary control-plane node in "auto-337407" cluster
	I1026 15:20:59.947505  922177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:20:59.950399  922177 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:20:59.953144  922177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:59.953196  922177 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:20:59.953208  922177 cache.go:58] Caching tarball of preloaded images
	I1026 15:20:59.953247  922177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:20:59.953306  922177 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:20:59.953319  922177 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:20:59.953426  922177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/config.json ...
	I1026 15:20:59.953447  922177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/config.json: {Name:mk7d5e86c8305b18c8b686019494c5cba52ee218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:59.973544  922177 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:20:59.973568  922177 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:20:59.973581  922177 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:20:59.973604  922177 start.go:360] acquireMachinesLock for auto-337407: {Name:mke4a53cda5bf2983bbbbd2fb9f51db15123b513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:20:59.973710  922177 start.go:364] duration metric: took 84.867µs to acquireMachinesLock for "auto-337407"
	I1026 15:20:59.973741  922177 start.go:93] Provisioning new machine with config: &{Name:auto-337407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-337407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:59.973815  922177 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.20876786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.217396861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.222114772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.246884274Z" level=info msg="Created container 71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper" id=ac3dbfbb-ed2b-4119-ba7b-a7b4c94a34b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.248123221Z" level=info msg="Starting container: 71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796" id=adfc98f6-f5d5-4c21-bfeb-62f1f74e5acf name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.252252013Z" level=info msg="Started container" PID=1653 containerID=71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper id=adfc98f6-f5d5-4c21-bfeb-62f1f74e5acf name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f8316ac436250acd2dc18f7b7a01d77164123cd4a1abd977ebf0dc5dafd34c4
	Oct 26 15:20:46 default-k8s-diff-port-494684 conmon[1651]: conmon 71f2cf630e8f015c4901 <ninfo>: container 1653 exited with status 1
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.646479549Z" level=info msg="Removing container: ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.655004918Z" level=info msg="Error loading conmon cgroup of container ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260: cgroup deleted" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.661924813Z" level=info msg="Removed container ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.036892698Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041339401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041408087Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041432046Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.049812191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.04985156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.049873706Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054586619Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054633044Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054655018Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059525799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059571527Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059596734Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.063753062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.063806888Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	71f2cf630e8f0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   7f8316ac43625       dashboard-metrics-scraper-6ffb444bf9-nkdqs             kubernetes-dashboard
	424850f4f7a96       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   0e5e3404b749d       storage-provisioner                                    kube-system
	b36b45e3ea3a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   646e45a350669       kubernetes-dashboard-855c9754f9-f9ct2                  kubernetes-dashboard
	9f16314e48b8f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   33faa6d0d72ca       kindnet-bfc62                                          kube-system
	34203d861db1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   0e5e3404b749d       storage-provisioner                                    kube-system
	013ec6f98c014       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   a53afe307bc93       coredns-66bc5c9577-zm8vb                               kube-system
	d17b0acafce8a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   110cb0e10512f       busybox                                                default
	a0180faaf0f1b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   dffd2b39d2c6d       kube-proxy-nbcd6                                       kube-system
	241c767113e68       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   8d61c1e93f0a7       kube-apiserver-default-k8s-diff-port-494684            kube-system
	7f98f8d7b370c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1cfcf12e504bf       etcd-default-k8s-diff-port-494684                      kube-system
	726d76ef97966       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   55fdd4aa541a4       kube-scheduler-default-k8s-diff-port-494684            kube-system
	76f8254b92018       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4b7a8f8e76197       kube-controller-manager-default-k8s-diff-port-494684   kube-system
	
	
	==> coredns [013ec6f98c0140858003af5f3659553f790b05e213708d5857f92ea159423b1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59170 - 54697 "HINFO IN 8430478019251975376.1677785563822640210. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012579987s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-494684
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-494684
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-494684
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-494684
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-494684
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6e20c02-f12b-4169-8ea1-8297398ff607
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-zm8vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-494684                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-bfc62                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-494684             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-494684    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-nbcd6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-494684             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nkdqs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f9ct2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-494684 event: Registered Node default-k8s-diff-port-494684 in Controller
	  Normal   NodeReady                103s                   kubelet          Node default-k8s-diff-port-494684 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-494684 event: Registered Node default-k8s-diff-port-494684 in Controller
	
	
	==> dmesg <==
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	[Oct26 15:20] overlayfs: idmapped layers are currently not supported
	[  +9.178014] overlayfs: idmapped layers are currently not supported
	[ +33.140474] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b] <==
	{"level":"warn","ts":"2025-10-26T15:20:04.898097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:04.957929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:04.997625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.058708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.108781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.146193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.226573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.273662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.305674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.344855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.389605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.429561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.496822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.529418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.558924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.613519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.659039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.693993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.746983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.785316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.850925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.908889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.932042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.996863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:06.204403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:21:05 up  5:03,  0 user,  load average: 3.91, 3.82, 3.23
	Linux default-k8s-diff-port-494684 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f16314e48b8fd0624cda906bc9d32caef8c5a24e782e7bfe524002f61e3eab3] <==
	I1026 15:20:10.798506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:20:10.868026       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:20:10.868177       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:20:10.868190       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:20:10.868205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:20:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:20:11.036188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:20:11.037324       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:20:11.037408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:20:11.038287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:20:41.036640       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:20:41.038021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:20:41.038141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:20:41.038235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:20:42.438475       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:20:42.438603       1 metrics.go:72] Registering metrics
	I1026 15:20:42.438695       1 controller.go:711] "Syncing nftables rules"
	I1026 15:20:51.036370       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:20:51.036525       1 main.go:301] handling current node
	I1026 15:21:01.040810       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:21:01.040847       1 main.go:301] handling current node
	
	
	==> kube-apiserver [241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6] <==
	I1026 15:20:08.074880       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:20:08.095694       1 policy_source.go:240] refreshing policies
	I1026 15:20:08.106459       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:20:08.109266       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:20:08.120899       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:20:08.129252       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:20:08.132547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:20:08.156999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:20:08.169786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:20:08.169951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:20:08.169963       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:20:08.170071       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:20:08.234412       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1026 15:20:08.435568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:20:08.759416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:20:09.658734       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:20:09.827614       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:20:09.984201       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:20:10.119977       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:20:10.789247       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.133.4"}
	I1026 15:20:10.958439       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.73.82"}
	I1026 15:20:13.584086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:13.584298       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:13.880045       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:20:13.932061       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc] <==
	I1026 15:20:13.451722       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:20:13.453997       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:20:13.454107       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:20:13.454168       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:20:13.454214       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:20:13.454243       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:20:13.454398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:13.457754       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:20:13.462287       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:20:13.466741       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:20:13.468440       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:20:13.472550       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:20:13.472772       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:20:13.473903       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:20:13.473956       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:20:13.477118       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:20:13.477222       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:20:13.480854       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:20:13.482003       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:20:13.486222       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:20:13.486584       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:20:13.492836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:13.511290       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:13.511374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:20:13.511404       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a0180faaf0f1bee4ee7d363cbda8c2925f3a5fa8d74fe22adef91512ea23fb5a] <==
	I1026 15:20:11.745963       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:20:11.862796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:20:11.988761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:20:12.041837       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:20:12.043377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:20:12.800937       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:20:12.801062       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:20:12.903188       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:20:12.903520       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:20:12.903533       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:12.904597       1 config.go:200] "Starting service config controller"
	I1026 15:20:12.904609       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:20:12.905878       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:20:12.905891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:20:12.905925       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:20:12.905930       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:20:12.906573       1 config.go:309] "Starting node config controller"
	I1026 15:20:12.906580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:20:12.906586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:20:13.010748       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:20:13.010794       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:20:13.050436       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd] <==
	I1026 15:20:11.665029       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:20:12.968832       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:20:12.971425       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:12.981700       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:20:12.981949       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:20:12.982079       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:20:12.983238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:12.991154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:20:13.020831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:20:13.053659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:20:13.050551       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:13.054118       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:13.082210       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 15:20:13.154774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.215950     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9672d\" (UniqueName: \"kubernetes.io/projected/27f1fe6d-9160-4237-810e-cd2e3879314c-kube-api-access-9672d\") pod \"dashboard-metrics-scraper-6ffb444bf9-nkdqs\" (UID: \"27f1fe6d-9160-4237-810e-cd2e3879314c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216026     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2313a016-1717-46d4-b96a-c1690b8d1d77-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f9ct2\" (UID: \"2313a016-1717-46d4-b96a-c1690b8d1d77\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216046     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d856j\" (UniqueName: \"kubernetes.io/projected/2313a016-1717-46d4-b96a-c1690b8d1d77-kube-api-access-d856j\") pod \"kubernetes-dashboard-855c9754f9-f9ct2\" (UID: \"2313a016-1717-46d4-b96a-c1690b8d1d77\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216063     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/27f1fe6d-9160-4237-810e-cd2e3879314c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nkdqs\" (UID: \"27f1fe6d-9160-4237-810e-cd2e3879314c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: W1026 15:20:14.402763     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/crio-646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421 WatchSource:0}: Error finding container 646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421: Status 404 returned error can't find the container with id 646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421
	Oct 26 15:20:17 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:17.338937     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:20:22 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:22.595764     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2" podStartSLOduration=2.30950248 podStartE2EDuration="9.595742018s" podCreationTimestamp="2025-10-26 15:20:13 +0000 UTC" firstStartedPulling="2025-10-26 15:20:14.413847334 +0000 UTC m=+14.564711698" lastFinishedPulling="2025-10-26 15:20:21.700086863 +0000 UTC m=+21.850951236" observedRunningTime="2025-10-26 15:20:22.581174875 +0000 UTC m=+22.732039290" watchObservedRunningTime="2025-10-26 15:20:22.595742018 +0000 UTC m=+22.746606383"
	Oct 26 15:20:27 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:27.576824     777 scope.go:117] "RemoveContainer" containerID="652735924df9826b160b97d04ae2c3a278a5d98999a9371a73deebcddde0f704"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:28.582795     777 scope.go:117] "RemoveContainer" containerID="652735924df9826b160b97d04ae2c3a278a5d98999a9371a73deebcddde0f704"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:28.583151     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:28.583326     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:29 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:29.587452     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:29 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:29.593056     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:34 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:34.428053     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:34 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:34.428250     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:41 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:41.621085     777 scope.go:117] "RemoveContainer" containerID="34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.203921     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.635990     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.636507     777 scope.go:117] "RemoveContainer" containerID="71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:46.636852     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:54 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:54.427080     777 scope.go:117] "RemoveContainer" containerID="71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	Oct 26 15:20:54 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:54.427313     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b36b45e3ea3a24ee46032be9e1b20ead00a7e68c7a9149c026a05817148a912a] <==
	2025/10/26 15:20:21 Using namespace: kubernetes-dashboard
	2025/10/26 15:20:21 Using in-cluster config to connect to apiserver
	2025/10/26 15:20:21 Using secret token for csrf signing
	2025/10/26 15:20:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:20:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:20:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:20:21 Generating JWE encryption key
	2025/10/26 15:20:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:20:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:20:21 Initializing JWE encryption key from synchronized object
	2025/10/26 15:20:21 Creating in-cluster Sidecar client
	2025/10/26 15:20:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:20:21 Serving insecurely on HTTP port: 9090
	2025/10/26 15:20:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:20:21 Starting overwatch
	
	
	==> storage-provisioner [34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c] <==
	I1026 15:20:11.086900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:20:41.093176       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [424850f4f7a96508483b33e142e2921faab21e2cccc2ce09d8328764c50179f0] <==
	I1026 15:20:41.682396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:20:41.712306       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:20:41.712436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:20:41.721938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:45.178150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:49.438898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:53.038275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:56.092440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.115319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.121088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:20:59.121304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:20:59.121511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7!
	I1026 15:20:59.122458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d25c15-ba1a-4898-94ee-0ef3b44a7fcb", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7 became leader
	W1026 15:20:59.134355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.142041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:20:59.222478       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7!
	W1026 15:21:01.148516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:01.168361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:03.187388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:03.213960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:05.217651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:05.230814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684: exit status 2 (765.407495ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-494684
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-494684:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	        "Created": "2025-10-26T15:18:07.847117574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 914658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:19:51.679630293Z",
	            "FinishedAt": "2025-10-26T15:19:50.173508243Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/hosts",
	        "LogPath": "/var/lib/docker/containers/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5-json.log",
	        "Name": "/default-k8s-diff-port-494684",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-494684:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-494684",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5",
	                "LowerDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8-init/diff:/var/lib/docker/overlay2/628847613aca53e31d7048588dfed4f78a8a4cbaf0e481fc5dd52bc270da2a41/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbc3a8ad63b91d2c814e416292f35c6cae92e42ffe519b757f38d888b4b6a8d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-494684",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-494684/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-494684",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-494684",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c906b02e8fd8bfa9eacbcae7e26dbef4030a4ac62750eff8dda539312a408e1",
	            "SandboxKey": "/var/run/docker/netns/8c906b02e8fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-494684": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:b8:44:e1:d3:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3a8cf1602f3f72d6a70a2be8fdd96fd095eb34b48ad075b2aa41a3d8b9118a52",
	                    "EndpointID": "6dedba145c61eb600863158e3bfbd6ebf46c1ab179b3fe6c6d1c272ad52fdf72",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-494684",
	                        "ff68c01604a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
E1026 15:21:07.364799  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684: exit status 2 (465.895346ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-494684 logs -n 25: (1.459690516s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-954807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │                     │
	│ stop    │ -p no-preload-954807 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ addons  │ enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:18 UTC │
	│ start   │ -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:18 UTC │ 26 Oct 25 15:19 UTC │
	│ image   │ no-preload-954807 image list --format=json                                                                                                                                                                                                    │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ pause   │ -p no-preload-954807 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-494684 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-494684 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ delete  │ -p no-preload-954807                                                                                                                                                                                                                          │ no-preload-954807            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:19 UTC │
	│ start   │ -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:19 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-810872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ stop    │ -p newest-cni-810872 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-810872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ image   │ newest-cni-810872 image list --format=json                                                                                                                                                                                                    │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p newest-cni-810872 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ delete  │ -p newest-cni-810872                                                                                                                                                                                                                          │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p newest-cni-810872                                                                                                                                                                                                                          │ newest-cni-810872            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p auto-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-337407                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │                     │
	│ image   │ default-k8s-diff-port-494684 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p default-k8s-diff-port-494684 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-494684 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:20:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:20:59.750189  922177 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:20:59.750340  922177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:59.750372  922177 out.go:374] Setting ErrFile to fd 2...
	I1026 15:20:59.750384  922177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:20:59.751220  922177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:20:59.751793  922177 out.go:368] Setting JSON to false
	I1026 15:20:59.752987  922177 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18212,"bootTime":1761473848,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:20:59.753090  922177 start.go:141] virtualization:  
	I1026 15:20:59.756796  922177 out.go:179] * [auto-337407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:20:59.760666  922177 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:20:59.760772  922177 notify.go:220] Checking for updates...
	I1026 15:20:59.766772  922177 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:20:59.769787  922177 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:20:59.772846  922177 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:20:59.775792  922177 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:20:59.778680  922177 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:20:59.782229  922177 config.go:182] Loaded profile config "default-k8s-diff-port-494684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:20:59.782380  922177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:20:59.808378  922177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:20:59.808504  922177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:59.870149  922177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:59.861144199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:59.870257  922177 docker.go:318] overlay module found
	I1026 15:20:59.873355  922177 out.go:179] * Using the docker driver based on user configuration
	I1026 15:20:59.876262  922177 start.go:305] selected driver: docker
	I1026 15:20:59.876278  922177 start.go:925] validating driver "docker" against <nil>
	I1026 15:20:59.876291  922177 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:20:59.877078  922177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:20:59.935162  922177 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 15:20:59.923936674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:20:59.935319  922177 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:20:59.935546  922177 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:20:59.938548  922177 out.go:179] * Using Docker driver with root privileges
	I1026 15:20:59.941477  922177 cni.go:84] Creating CNI manager for ""
	I1026 15:20:59.941551  922177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:20:59.941563  922177 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:20:59.941643  922177 start.go:349] cluster config:
	{Name:auto-337407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-337407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:20:59.944650  922177 out.go:179] * Starting "auto-337407" primary control-plane node in "auto-337407" cluster
	I1026 15:20:59.947505  922177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:20:59.950399  922177 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:20:59.953144  922177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:20:59.953196  922177 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 15:20:59.953208  922177 cache.go:58] Caching tarball of preloaded images
	I1026 15:20:59.953247  922177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:20:59.953306  922177 preload.go:233] Found /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1026 15:20:59.953319  922177 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:20:59.953426  922177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/config.json ...
	I1026 15:20:59.953447  922177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/config.json: {Name:mk7d5e86c8305b18c8b686019494c5cba52ee218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:20:59.973544  922177 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:20:59.973568  922177 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:20:59.973581  922177 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:20:59.973604  922177 start.go:360] acquireMachinesLock for auto-337407: {Name:mke4a53cda5bf2983bbbbd2fb9f51db15123b513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:20:59.973710  922177 start.go:364] duration metric: took 84.867µs to acquireMachinesLock for "auto-337407"
	I1026 15:20:59.973741  922177 start.go:93] Provisioning new machine with config: &{Name:auto-337407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-337407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:20:59.973815  922177 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:20:59.977153  922177 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:20:59.977390  922177 start.go:159] libmachine.API.Create for "auto-337407" (driver="docker")
	I1026 15:20:59.977441  922177 client.go:168] LocalClient.Create starting
	I1026 15:20:59.977531  922177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/ca.pem
	I1026 15:20:59.977570  922177 main.go:141] libmachine: Decoding PEM data...
	I1026 15:20:59.977588  922177 main.go:141] libmachine: Parsing certificate...
	I1026 15:20:59.977647  922177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-713593/.minikube/certs/cert.pem
	I1026 15:20:59.977678  922177 main.go:141] libmachine: Decoding PEM data...
	I1026 15:20:59.977693  922177 main.go:141] libmachine: Parsing certificate...
	I1026 15:20:59.978058  922177 cli_runner.go:164] Run: docker network inspect auto-337407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:20:59.994460  922177 cli_runner.go:211] docker network inspect auto-337407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:20:59.994584  922177 network_create.go:284] running [docker network inspect auto-337407] to gather additional debugging logs...
	I1026 15:20:59.994616  922177 cli_runner.go:164] Run: docker network inspect auto-337407
	W1026 15:21:00.042940  922177 cli_runner.go:211] docker network inspect auto-337407 returned with exit code 1
	I1026 15:21:00.042984  922177 network_create.go:287] error running [docker network inspect auto-337407]: docker network inspect auto-337407: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-337407 not found
	I1026 15:21:00.043000  922177 network_create.go:289] output of [docker network inspect auto-337407]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-337407 not found
	
	** /stderr **
	I1026 15:21:00.043149  922177 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:21:00.113489  922177 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
	I1026 15:21:00.113940  922177 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fbc8966b2b43 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:06:24:03:84:06} reservation:<nil>}
	I1026 15:21:00.114627  922177 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ee90ee61ab30 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:3d:16:3a:41} reservation:<nil>}
	I1026 15:21:00.115048  922177 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3a8cf1602f3f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:d6:72:bf:60:a9} reservation:<nil>}
	I1026 15:21:00.118182  922177 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a302c0}
	I1026 15:21:00.118243  922177 network_create.go:124] attempt to create docker network auto-337407 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 15:21:00.118345  922177 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-337407 auto-337407
	I1026 15:21:00.429999  922177 network_create.go:108] docker network auto-337407 192.168.85.0/24 created
	I1026 15:21:00.430034  922177 kic.go:121] calculated static IP "192.168.85.2" for the "auto-337407" container
	I1026 15:21:00.430116  922177 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:21:00.451253  922177 cli_runner.go:164] Run: docker volume create auto-337407 --label name.minikube.sigs.k8s.io=auto-337407 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:21:00.473654  922177 oci.go:103] Successfully created a docker volume auto-337407
	I1026 15:21:00.473760  922177 cli_runner.go:164] Run: docker run --rm --name auto-337407-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-337407 --entrypoint /usr/bin/test -v auto-337407:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:21:01.048253  922177 oci.go:107] Successfully prepared a docker volume auto-337407
	I1026 15:21:01.048303  922177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:21:01.048324  922177 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:21:01.048398  922177 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-337407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.20876786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.217396861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.222114772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.246884274Z" level=info msg="Created container 71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper" id=ac3dbfbb-ed2b-4119-ba7b-a7b4c94a34b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.248123221Z" level=info msg="Starting container: 71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796" id=adfc98f6-f5d5-4c21-bfeb-62f1f74e5acf name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.252252013Z" level=info msg="Started container" PID=1653 containerID=71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper id=adfc98f6-f5d5-4c21-bfeb-62f1f74e5acf name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f8316ac436250acd2dc18f7b7a01d77164123cd4a1abd977ebf0dc5dafd34c4
	Oct 26 15:20:46 default-k8s-diff-port-494684 conmon[1651]: conmon 71f2cf630e8f015c4901 <ninfo>: container 1653 exited with status 1
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.646479549Z" level=info msg="Removing container: ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.655004918Z" level=info msg="Error loading conmon cgroup of container ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260: cgroup deleted" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:46 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:46.661924813Z" level=info msg="Removed container ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs/dashboard-metrics-scraper" id=04e2182b-fc13-4975-b9e3-215e6221ff46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.036892698Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041339401Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041408087Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.041432046Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.049812191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.04985156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.049873706Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054586619Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054633044Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.054655018Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059525799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059571527Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.059596734Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.063753062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:20:51 default-k8s-diff-port-494684 crio[650]: time="2025-10-26T15:20:51.063806888Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	71f2cf630e8f0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   7f8316ac43625       dashboard-metrics-scraper-6ffb444bf9-nkdqs             kubernetes-dashboard
	424850f4f7a96       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   0e5e3404b749d       storage-provisioner                                    kube-system
	b36b45e3ea3a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   646e45a350669       kubernetes-dashboard-855c9754f9-f9ct2                  kubernetes-dashboard
	9f16314e48b8f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   33faa6d0d72ca       kindnet-bfc62                                          kube-system
	34203d861db1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   0e5e3404b749d       storage-provisioner                                    kube-system
	013ec6f98c014       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   a53afe307bc93       coredns-66bc5c9577-zm8vb                               kube-system
	d17b0acafce8a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   110cb0e10512f       busybox                                                default
	a0180faaf0f1b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   dffd2b39d2c6d       kube-proxy-nbcd6                                       kube-system
	241c767113e68       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   8d61c1e93f0a7       kube-apiserver-default-k8s-diff-port-494684            kube-system
	7f98f8d7b370c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1cfcf12e504bf       etcd-default-k8s-diff-port-494684                      kube-system
	726d76ef97966       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   55fdd4aa541a4       kube-scheduler-default-k8s-diff-port-494684            kube-system
	76f8254b92018       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4b7a8f8e76197       kube-controller-manager-default-k8s-diff-port-494684   kube-system
	
	
	==> coredns [013ec6f98c0140858003af5f3659553f790b05e213708d5857f92ea159423b1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59170 - 54697 "HINFO IN 8430478019251975376.1677785563822640210. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012579987s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-494684
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-494684
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-494684
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-494684
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:20:49 +0000   Sun, 26 Oct 2025 15:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-494684
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6e20c02-f12b-4169-8ea1-8297398ff607
	  Boot ID:                    f26e674d-cfe0-4f37-8155-b6cf640e5788
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-zm8vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-default-k8s-diff-port-494684                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-bfc62                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-default-k8s-diff-port-494684             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-494684    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-nbcd6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-default-k8s-diff-port-494684             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nkdqs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f9ct2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x8 over 2m43s)  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m29s                  node-controller  Node default-k8s-diff-port-494684 event: Registered Node default-k8s-diff-port-494684 in Controller
	  Normal   NodeReady                106s                   kubelet          Node default-k8s-diff-port-494684 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-494684 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-494684 event: Registered Node default-k8s-diff-port-494684 in Controller
	
	
	==> dmesg <==
	[ +17.917847] overlayfs: idmapped layers are currently not supported
	[Oct26 14:59] overlayfs: idmapped layers are currently not supported
	[ +18.145622] overlayfs: idmapped layers are currently not supported
	[Oct26 15:00] overlayfs: idmapped layers are currently not supported
	[Oct26 15:01] overlayfs: idmapped layers are currently not supported
	[Oct26 15:02] overlayfs: idmapped layers are currently not supported
	[Oct26 15:03] overlayfs: idmapped layers are currently not supported
	[Oct26 15:05] overlayfs: idmapped layers are currently not supported
	[Oct26 15:06] overlayfs: idmapped layers are currently not supported
	[Oct26 15:07] overlayfs: idmapped layers are currently not supported
	[Oct26 15:09] overlayfs: idmapped layers are currently not supported
	[Oct26 15:10] overlayfs: idmapped layers are currently not supported
	[Oct26 15:11] overlayfs: idmapped layers are currently not supported
	[ +14.895337] overlayfs: idmapped layers are currently not supported
	[Oct26 15:12] overlayfs: idmapped layers are currently not supported
	[ +38.780453] overlayfs: idmapped layers are currently not supported
	[Oct26 15:13] overlayfs: idmapped layers are currently not supported
	[Oct26 15:15] overlayfs: idmapped layers are currently not supported
	[Oct26 15:16] overlayfs: idmapped layers are currently not supported
	[ +12.563674] overlayfs: idmapped layers are currently not supported
	[Oct26 15:18] overlayfs: idmapped layers are currently not supported
	[  +8.045984] overlayfs: idmapped layers are currently not supported
	[Oct26 15:20] overlayfs: idmapped layers are currently not supported
	[  +9.178014] overlayfs: idmapped layers are currently not supported
	[ +33.140474] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f98f8d7b370c0262b7b8305334add4092bc7bb084d8f736c2dfb8914762723b] <==
	{"level":"warn","ts":"2025-10-26T15:20:04.898097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:04.957929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:04.997625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.058708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.108781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.146193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.226573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.273662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.305674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.344855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.389605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.429561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.496822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.529418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.558924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.613519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.659039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.693993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.746983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.785316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.850925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.908889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.932042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:05.996863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:20:06.204403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:21:08 up  5:03,  0 user,  load average: 3.75, 3.79, 3.23
	Linux default-k8s-diff-port-494684 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f16314e48b8fd0624cda906bc9d32caef8c5a24e782e7bfe524002f61e3eab3] <==
	I1026 15:20:10.798506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:20:10.868026       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:20:10.868177       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:20:10.868190       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:20:10.868205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:20:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:20:11.036188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:20:11.037324       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:20:11.037408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:20:11.038287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:20:41.036640       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:20:41.038021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:20:41.038141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 15:20:41.038235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 15:20:42.438475       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:20:42.438603       1 metrics.go:72] Registering metrics
	I1026 15:20:42.438695       1 controller.go:711] "Syncing nftables rules"
	I1026 15:20:51.036370       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:20:51.036525       1 main.go:301] handling current node
	I1026 15:21:01.040810       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:21:01.040847       1 main.go:301] handling current node
	
	
	==> kube-apiserver [241c767113e68c1f22448bdbebeb0a4e52ed25a88c70b543c9b9d67191107fe6] <==
	I1026 15:20:08.074880       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:20:08.095694       1 policy_source.go:240] refreshing policies
	I1026 15:20:08.106459       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:20:08.109266       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:20:08.120899       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:20:08.129252       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:20:08.132547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:20:08.156999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:20:08.169786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:20:08.169951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:20:08.169963       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:20:08.170071       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:20:08.234412       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1026 15:20:08.435568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:20:08.759416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:20:09.658734       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:20:09.827614       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:20:09.984201       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:20:10.119977       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:20:10.789247       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.133.4"}
	I1026 15:20:10.958439       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.73.82"}
	I1026 15:20:13.584086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:13.584298       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:20:13.880045       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:20:13.932061       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [76f8254b92018f8ae8e793d8373b480a5d5fd6589077c7f793456dfa1a8a71cc] <==
	I1026 15:20:13.451722       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:20:13.453997       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:20:13.454107       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:20:13.454168       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:20:13.454214       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:20:13.454243       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:20:13.454398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:13.457754       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:20:13.462287       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:20:13.466741       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:20:13.468440       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:20:13.472550       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:20:13.472772       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:20:13.473903       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:20:13.473956       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:20:13.477118       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:20:13.477222       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:20:13.480854       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:20:13.482003       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:20:13.486222       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:20:13.486584       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:20:13.492836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:20:13.511290       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:20:13.511374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:20:13.511404       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a0180faaf0f1bee4ee7d363cbda8c2925f3a5fa8d74fe22adef91512ea23fb5a] <==
	I1026 15:20:11.745963       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:20:11.862796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:20:11.988761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:20:12.041837       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:20:12.043377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:20:12.800937       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:20:12.801062       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:20:12.903188       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:20:12.903520       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:20:12.903533       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:12.904597       1 config.go:200] "Starting service config controller"
	I1026 15:20:12.904609       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:20:12.905878       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:20:12.905891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:20:12.905925       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:20:12.905930       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:20:12.906573       1 config.go:309] "Starting node config controller"
	I1026 15:20:12.906580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:20:12.906586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:20:13.010748       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:20:13.010794       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:20:13.050436       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [726d76ef979662bc62bda3f5d764d66efbaf72659b362834d790c61451facabd] <==
	I1026 15:20:11.665029       1 serving.go:386] Generated self-signed cert in-memory
	I1026 15:20:12.968832       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:20:12.971425       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:20:12.981700       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:20:12.981949       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 15:20:12.982079       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 15:20:12.983238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:12.991154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:20:13.020831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:20:13.053659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 15:20:13.050551       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:13.054118       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:20:13.082210       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 15:20:13.154774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.215950     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9672d\" (UniqueName: \"kubernetes.io/projected/27f1fe6d-9160-4237-810e-cd2e3879314c-kube-api-access-9672d\") pod \"dashboard-metrics-scraper-6ffb444bf9-nkdqs\" (UID: \"27f1fe6d-9160-4237-810e-cd2e3879314c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216026     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2313a016-1717-46d4-b96a-c1690b8d1d77-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f9ct2\" (UID: \"2313a016-1717-46d4-b96a-c1690b8d1d77\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216046     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d856j\" (UniqueName: \"kubernetes.io/projected/2313a016-1717-46d4-b96a-c1690b8d1d77-kube-api-access-d856j\") pod \"kubernetes-dashboard-855c9754f9-f9ct2\" (UID: \"2313a016-1717-46d4-b96a-c1690b8d1d77\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:14.216063     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/27f1fe6d-9160-4237-810e-cd2e3879314c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nkdqs\" (UID: \"27f1fe6d-9160-4237-810e-cd2e3879314c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs"
	Oct 26 15:20:14 default-k8s-diff-port-494684 kubelet[777]: W1026 15:20:14.402763     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff68c01604a65170fb7e16833f4036e5ed0ce181e247376f63c5588a7fe37aa5/crio-646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421 WatchSource:0}: Error finding container 646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421: Status 404 returned error can't find the container with id 646e45a350669a2b63aca826b6a73acecd0155ce3b3bcbc761d0db9178788421
	Oct 26 15:20:17 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:17.338937     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:20:22 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:22.595764     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f9ct2" podStartSLOduration=2.30950248 podStartE2EDuration="9.595742018s" podCreationTimestamp="2025-10-26 15:20:13 +0000 UTC" firstStartedPulling="2025-10-26 15:20:14.413847334 +0000 UTC m=+14.564711698" lastFinishedPulling="2025-10-26 15:20:21.700086863 +0000 UTC m=+21.850951236" observedRunningTime="2025-10-26 15:20:22.581174875 +0000 UTC m=+22.732039290" watchObservedRunningTime="2025-10-26 15:20:22.595742018 +0000 UTC m=+22.746606383"
	Oct 26 15:20:27 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:27.576824     777 scope.go:117] "RemoveContainer" containerID="652735924df9826b160b97d04ae2c3a278a5d98999a9371a73deebcddde0f704"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:28.582795     777 scope.go:117] "RemoveContainer" containerID="652735924df9826b160b97d04ae2c3a278a5d98999a9371a73deebcddde0f704"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:28.583151     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:28 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:28.583326     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:29 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:29.587452     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:29 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:29.593056     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:34 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:34.428053     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:34 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:34.428250     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:41 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:41.621085     777 scope.go:117] "RemoveContainer" containerID="34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.203921     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.635990     777 scope.go:117] "RemoveContainer" containerID="ef0c870c38568763ae4a5bf73d12372f03aaa3c6972cb69e66720adcee4d2260"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:46.636507     777 scope.go:117] "RemoveContainer" containerID="71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	Oct 26 15:20:46 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:46.636852     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:20:54 default-k8s-diff-port-494684 kubelet[777]: I1026 15:20:54.427080     777 scope.go:117] "RemoveContainer" containerID="71f2cf630e8f015c4901ff64cf45d8185764c85ff02cf750e109a19be44c6796"
	Oct 26 15:20:54 default-k8s-diff-port-494684 kubelet[777]: E1026 15:20:54.427313     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nkdqs_kubernetes-dashboard(27f1fe6d-9160-4237-810e-cd2e3879314c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nkdqs" podUID="27f1fe6d-9160-4237-810e-cd2e3879314c"
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:21:01 default-k8s-diff-port-494684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b36b45e3ea3a24ee46032be9e1b20ead00a7e68c7a9149c026a05817148a912a] <==
	2025/10/26 15:20:21 Starting overwatch
	2025/10/26 15:20:21 Using namespace: kubernetes-dashboard
	2025/10/26 15:20:21 Using in-cluster config to connect to apiserver
	2025/10/26 15:20:21 Using secret token for csrf signing
	2025/10/26 15:20:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:20:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:20:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:20:21 Generating JWE encryption key
	2025/10/26 15:20:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:20:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:20:21 Initializing JWE encryption key from synchronized object
	2025/10/26 15:20:21 Creating in-cluster Sidecar client
	2025/10/26 15:20:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:20:21 Serving insecurely on HTTP port: 9090
	2025/10/26 15:20:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [34203d861db1b513410f70689c4c375b55b095552bd392f44b4fecf2d42c911c] <==
	I1026 15:20:11.086900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:20:41.093176       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [424850f4f7a96508483b33e142e2921faab21e2cccc2ce09d8328764c50179f0] <==
	I1026 15:20:41.682396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:20:41.712306       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:20:41.712436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:20:41.721938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:45.178150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:49.438898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:53.038275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:56.092440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.115319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.121088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:20:59.121304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:20:59.121511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7!
	I1026 15:20:59.122458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d25c15-ba1a-4898-94ee-0ef3b44a7fcb", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7 became leader
	W1026 15:20:59.134355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:20:59.142041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:20:59.222478       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-494684_32baec7a-4c84-46f5-947d-7fd7f9892fe7!
	W1026 15:21:01.148516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:01.168361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:03.187388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:03.213960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:05.217651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:05.230814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:07.234207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:21:07.239273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
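The "==> component <==" sections above are minikube's standard post-mortem log dump, collected by the test helpers after a failure. Assuming the default-k8s-diff-port-494684 profile still exists on the host, the same dump can be regenerated by hand with the logs subcommand used elsewhere in this report:

	out/minikube-linux-arm64 logs -p default-k8s-diff-port-494684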
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684: exit status 2 (354.525264ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.26s)
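A minimal sketch for reproducing this Pause failure locally, assuming the test drives the same CLI entry points that appear in the log (the pause invocation is inferred; the status check is copied verbatim from the post-mortem above):

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-494684 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-494684

In the post-mortem, the API server reports Running while the status command exits 2, which the helper itself flags as "(may be ok)".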
E1026 15:27:09.751536  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.218703  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.225036  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.236392  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.257781  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.299251  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.380709  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.542226  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:29.865124  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:30.506904  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:31.788321  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:34.350677  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:37.987582  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:37.994015  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.010073  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.031574  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.073037  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.154596  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.316174  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:38.637882  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:39.279218  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:39.472764  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:39.980077  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:40.560997  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:43.122998  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:48.244881  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:27:49.714324  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/auto-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
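The cert_rotation errors above are post-test noise rather than failures: the shared client in the test process keeps trying to reload client certificates for profiles (default-k8s-diff-port-494684, auto-337407, kindnet-337407, addons-501661) whose .minikube/profiles directories have already been removed. Assuming a profile is gone for good, deleting it through minikube should also clear the stale kubeconfig entry that triggers the reload, e.g.:

	out/minikube-linux-arm64 delete -p auto-337407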

                                                
                                    

Test pass (260/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.15
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.39
9 TestDownloadOnly/v1.28.0/DeleteAll 0.36
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.24
12 TestDownloadOnly/v1.34.1/json-events 5.26
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 177.1
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.76
48 TestAddons/StoppedEnableDisable 12.47
49 TestCertOptions 37.38
50 TestCertExpiration 333.33
52 TestForceSystemdFlag 41.33
53 TestForceSystemdEnv 40.71
58 TestErrorSpam/setup 35.18
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.27
61 TestErrorSpam/pause 5.46
62 TestErrorSpam/unpause 6.62
63 TestErrorSpam/stop 12.11
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.98
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.07
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 37.4
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.5
87 TestFunctional/serial/InvalidService 4.12
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 9.04
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 24.89
101 TestFunctional/parallel/SSHCmd 0.57
102 TestFunctional/parallel/CpCmd 2.03
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 2.19
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.3
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 1.2
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
121 TestFunctional/parallel/ImageCommands/Setup 0.7
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.38
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.52
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
150 TestFunctional/parallel/ProfileCmd/profile_list 0.45
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
152 TestFunctional/parallel/MountCmd/any-port 8.05
153 TestFunctional/parallel/MountCmd/specific-port 2.14
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.36
155 TestFunctional/delete_echo-server_images 0.06
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 208.04
163 TestMultiControlPlane/serial/DeployApp 42.48
164 TestMultiControlPlane/serial/PingHostFromPods 1.46
165 TestMultiControlPlane/serial/AddWorkerNode 61.37
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.51
169 TestMultiControlPlane/serial/StopSecondaryNode 12.93
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 31.43
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.68
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.24
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.41
177 TestMultiControlPlane/serial/RestartCluster 66.43
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 54.87
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
184 TestJSONOutput/start/Command 77.87
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.87
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 44.17
210 TestKicCustomNetwork/use_default_bridge_network 37.99
211 TestKicExistingNetwork 37.69
212 TestKicCustomSubnet 38.8
213 TestKicStaticIP 40.17
214 TestMainNoArgs 0.07
215 TestMinikubeProfile 76.59
218 TestMountStart/serial/StartWithMountFirst 7.34
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 6.92
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.75
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.3
225 TestMountStart/serial/RestartStopped 8.12
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 137.3
230 TestMultiNode/serial/DeployApp2Nodes 6.21
231 TestMultiNode/serial/PingHostFrom2Pods 0.9
232 TestMultiNode/serial/AddNode 58.35
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.71
235 TestMultiNode/serial/CopyFile 10.56
236 TestMultiNode/serial/StopNode 2.41
237 TestMultiNode/serial/StartAfterStop 8.44
238 TestMultiNode/serial/RestartKeepsNodes 78.08
239 TestMultiNode/serial/DeleteNode 5.64
240 TestMultiNode/serial/StopMultiNode 24.25
241 TestMultiNode/serial/RestartMultiNode 55.82
242 TestMultiNode/serial/ValidateNameConflict 35.65
247 TestPreload 129.19
249 TestScheduledStopUnix 106.24
252 TestInsufficientStorage 11.31
253 TestRunningBinaryUpgrade 53.65
255 TestKubernetesUpgrade 216.17
256 TestMissingContainerUpgrade 123.91
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 53.69
260 TestNoKubernetes/serial/StartWithStopK8s 8.72
261 TestNoKubernetes/serial/Start 9.3
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
263 TestNoKubernetes/serial/ProfileList 0.69
264 TestNoKubernetes/serial/Stop 1.29
265 TestNoKubernetes/serial/StartNoArgs 6.91
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
267 TestStoppedBinaryUpgrade/Setup 0.83
268 TestStoppedBinaryUpgrade/Upgrade 58.93
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
278 TestPause/serial/Start 86.79
279 TestPause/serial/SecondStartNoReconfiguration 29.76
287 TestNetworkPlugins/group/false 4.56
293 TestStartStop/group/old-k8s-version/serial/FirstStart 64.61
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
296 TestStartStop/group/old-k8s-version/serial/Stop 12.01
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
298 TestStartStop/group/old-k8s-version/serial/SecondStart 47.82
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
304 TestStartStop/group/embed-certs/serial/FirstStart 76.93
305 TestStartStop/group/embed-certs/serial/DeployApp 10.33
307 TestStartStop/group/embed-certs/serial/Stop 12.14
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
309 TestStartStop/group/embed-certs/serial/SecondStart 61.92
311 TestStartStop/group/no-preload/serial/FirstStart 73.41
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
316 TestStartStop/group/no-preload/serial/DeployApp 8.57
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.64
320 TestStartStop/group/no-preload/serial/Stop 12.21
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
322 TestStartStop/group/no-preload/serial/SecondStart 58.31
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.53
331 TestStartStop/group/newest-cni/serial/FirstStart 45.3
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.61
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 2.14
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
338 TestStartStop/group/newest-cni/serial/SecondStart 16.09
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
345 TestNetworkPlugins/group/auto/Start 88.93
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
348 TestNetworkPlugins/group/kindnet/Start 85.48
349 TestNetworkPlugins/group/auto/KubeletFlags 0.32
350 TestNetworkPlugins/group/auto/NetCatPod 9.3
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
356 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
357 TestNetworkPlugins/group/kindnet/DNS 0.29
358 TestNetworkPlugins/group/kindnet/Localhost 0.16
359 TestNetworkPlugins/group/kindnet/HairPin 0.18
360 TestNetworkPlugins/group/calico/Start 81.16
361 TestNetworkPlugins/group/custom-flannel/Start 71.86
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.33
364 TestNetworkPlugins/group/calico/NetCatPod 10.28
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
367 TestNetworkPlugins/group/calico/DNS 0.22
368 TestNetworkPlugins/group/calico/Localhost 0.13
369 TestNetworkPlugins/group/calico/HairPin 0.15
370 TestNetworkPlugins/group/custom-flannel/DNS 0.23
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
373 TestNetworkPlugins/group/enable-default-cni/Start 81.82
374 TestNetworkPlugins/group/flannel/Start 68.72
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.38
379 TestNetworkPlugins/group/flannel/NetCatPod 12.37
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
381 TestNetworkPlugins/group/flannel/DNS 0.19
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
384 TestNetworkPlugins/group/flannel/Localhost 0.22
385 TestNetworkPlugins/group/flannel/HairPin 0.2
386 TestNetworkPlugins/group/bridge/Start 48.67
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
388 TestNetworkPlugins/group/bridge/NetCatPod 11.28
389 TestNetworkPlugins/group/bridge/DNS 0.16
390 TestNetworkPlugins/group/bridge/Localhost 0.12
391 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (5.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-638833 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-638833 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.146636573s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 14:14:33.941746  715440 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 14:14:33.941837  715440 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
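The preload-exists check only asserts that the cached tarball reported above is present on disk; assuming the same MINIKUBE_HOME, the equivalent manual check is:

	ls -lh /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4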

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-638833
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-638833: exit status 85 (387.711799ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-638833 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-638833 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:28.843234  715446 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:28.843352  715446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:28.843360  715446 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:28.843365  715446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:28.843623  715446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	W1026 14:14:28.843760  715446 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-713593/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-713593/.minikube/config/config.json: no such file or directory
	I1026 14:14:28.844165  715446 out.go:368] Setting JSON to true
	I1026 14:14:28.845032  715446 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14221,"bootTime":1761473848,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:14:28.845098  715446 start.go:141] virtualization:  
	I1026 14:14:28.849176  715446 out.go:99] [download-only-638833] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1026 14:14:28.849354  715446 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 14:14:28.849439  715446 notify.go:220] Checking for updates...
	I1026 14:14:28.852318  715446 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:14:28.855273  715446 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:28.858163  715446 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:14:28.861045  715446 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:14:28.863977  715446 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 14:14:28.869784  715446 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:14:28.870104  715446 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:28.895100  715446 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:14:28.895214  715446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:28.953867  715446 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-26 14:14:28.945098336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:28.953978  715446 docker.go:318] overlay module found
	I1026 14:14:28.957039  715446 out.go:99] Using the docker driver based on user configuration
	I1026 14:14:28.957086  715446 start.go:305] selected driver: docker
	I1026 14:14:28.957099  715446 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:28.957223  715446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:29.018300  715446 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-26 14:14:29.007899806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:29.018461  715446 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:29.018764  715446 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1026 14:14:29.018922  715446 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:14:29.022188  715446 out.go:171] Using Docker driver with root privileges
	I1026 14:14:29.025309  715446 cni.go:84] Creating CNI manager for ""
	I1026 14:14:29.025398  715446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:29.025414  715446 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:29.025513  715446 start.go:349] cluster config:
	{Name:download-only-638833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-638833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:29.028805  715446 out.go:99] Starting "download-only-638833" primary control-plane node in "download-only-638833" cluster
	I1026 14:14:29.028853  715446 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:29.031969  715446 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:29.032022  715446 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 14:14:29.032109  715446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:29.053381  715446 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:29.054174  715446 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:29.054280  715446 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:29.184716  715446 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:29.184745  715446 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:29.184964  715446 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 14:14:29.188269  715446 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1026 14:14:29.188299  715446 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1026 14:14:29.276916  715446 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1026 14:14:29.277113  715446 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:32.928433  715446 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 14:14:32.928953  715446 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/download-only-638833/config.json ...
	I1026 14:14:32.929019  715446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/download-only-638833/config.json: {Name:mk47a6c27b36a755f498a38c86e4193484715904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:32.929863  715446 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 14:14:32.930199  715446 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-638833 host does not exist
	  To start a cluster, run: "minikube start -p download-only-638833"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.39s)
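The preload fetch above passes the expected md5 to the downloader as a ?checksum= query parameter and verifies the tarball as it is written. Below is a minimal Go sketch of that verify-while-streaming pattern (not minikube's actual download helper; the destination path is made up, while the URL and digest are the ones recorded in the log):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url into dest while hashing every byte,
	// then compares the hex digest against want. Illustrative only.
	func downloadWithMD5(url, dest, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		// TeeReader feeds the hash with everything io.Copy writes to the
		// file, so no second pass over the tarball is needed.
		if _, err := io.Copy(f, io.TeeReader(resp.Body, h)); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// URL and checksum as recorded above; the /tmp path is hypothetical.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
			"/tmp/preload.tar.lz4",
			"e092595ade89dbfc477bd4cd6b9c633b",
		)
		if err != nil {
			fmt.Println(err)
		}
	}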

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-638833
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-758046 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-758046 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.254945679s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.26s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 14:14:40.184060  715440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 14:14:40.184101  715440 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-758046
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-758046: exit status 85 (94.751138ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-638833 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-638833 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-638833                                                                                                                                                   │ download-only-638833 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ -o=json --download-only -p download-only-758046 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-758046 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:34.976094  715648 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:34.976243  715648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:34.976255  715648 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:34.976285  715648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:34.976592  715648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:14:34.977106  715648 out.go:368] Setting JSON to true
	I1026 14:14:34.977975  715648 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14227,"bootTime":1761473848,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:14:34.978047  715648 start.go:141] virtualization:  
	I1026 14:14:35.020831  715648 out.go:99] [download-only-758046] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 14:14:35.021261  715648 notify.go:220] Checking for updates...
	I1026 14:14:35.051861  715648 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:14:35.083704  715648 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:35.116823  715648 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:14:35.149154  715648 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:14:35.190960  715648 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 14:14:35.256141  715648 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:14:35.256439  715648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:35.278348  715648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:14:35.279256  715648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:35.334894  715648 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-26 14:14:35.325723227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:35.335016  715648 docker.go:318] overlay module found
	I1026 14:14:35.381987  715648 out.go:99] Using the docker driver based on user configuration
	I1026 14:14:35.382046  715648 start.go:305] selected driver: docker
	I1026 14:14:35.382059  715648 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:35.382191  715648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:35.444329  715648 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-26 14:14:35.433538846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:14:35.444494  715648 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:35.444836  715648 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1026 14:14:35.444997  715648 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:14:35.480420  715648 out.go:171] Using Docker driver with root privileges
	I1026 14:14:35.511928  715648 cni.go:84] Creating CNI manager for ""
	I1026 14:14:35.512016  715648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:35.512028  715648 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:35.512122  715648 start.go:349] cluster config:
	{Name:download-only-758046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-758046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:35.544018  715648 out.go:99] Starting "download-only-758046" primary control-plane node in "download-only-758046" cluster
	I1026 14:14:35.544060  715648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:35.576337  715648 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:35.576396  715648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:35.576460  715648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:35.593314  715648 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:35.593442  715648 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:35.593467  715648 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:35.593472  715648 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:35.593480  715648 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:35.629638  715648 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:35.629664  715648 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:35.629829  715648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:35.660655  715648 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1026 14:14:35.660705  715648 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1026 14:14:35.840074  715648 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1026 14:14:35.840136  715648 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1026 14:14:39.525307  715648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:39.525718  715648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/download-only-758046/config.json ...
	I1026 14:14:39.525756  715648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/download-only-758046/config.json: {Name:mkeeb168504fd984c63888e09cbbddaa76c7a1aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:39.525953  715648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:39.526771  715648 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21664-713593/.minikube/cache/bin/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-758046 host does not exist
	  To start a cluster, run: "minikube start -p download-only-758046"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-758046
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 14:14:41.324490  715440 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-069171 --alsologtostderr --binary-mirror http://127.0.0.1:38609 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-069171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-069171
--- PASS: TestBinaryMirror (0.60s)
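TestBinaryMirror points the kubectl download at --binary-mirror http://127.0.0.1:38609 instead of dl.k8s.io. A mirror for that purpose can be as small as a static file server; the sketch below assumes the binaries and their .sha256 files have been copied under a local ./mirror directory with the same /release/... layout the client requests:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror verbatim; a request for
		// /release/v1.34.1/bin/linux/arm64/kubectl must map to a file on disk.
		// The directory layout is an assumption; the port matches the test
		// invocation above.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:38609", nil))
	}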

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-501661
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-501661: exit status 85 (75.688079ms)

                                                
                                                
-- stdout --
	* Profile "addons-501661" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-501661"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-501661
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-501661: exit status 85 (71.997472ms)

                                                
                                                
-- stdout --
	* Profile "addons-501661" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-501661"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (177.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-501661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-501661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m57.097297621s)
--- PASS: TestAddons/Setup (177.10s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-501661 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-501661 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.76s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-501661 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-501661 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [27573e52-df77-4510-b9c5-2b87310015f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [27573e52-df77-4510-b9c5-2b87310015f0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003417892s
addons_test.go:694: (dbg) Run:  kubectl --context addons-501661 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-501661 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-501661 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-501661 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.47s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-501661
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-501661: (12.182852312s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-501661
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-501661
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-501661
--- PASS: TestAddons/StoppedEnableDisable (12.47s)

                                                
                                    
TestCertOptions (37.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-209492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.500039322s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-209492 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-209492 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-209492 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-209492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-209492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-209492: (2.109669849s)
--- PASS: TestCertOptions (37.38s)
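The openssl step above verifies that the extra --apiserver-ips and --apiserver-names landed as subject alternative names in apiserver.crt, and that the non-default port 8555 shows up in the kubeconfig. The same SAN inspection can be done with Go's crypto/x509; a sketch, assuming the certificate has been copied out of the node to a local file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// For this test run, 192.168.15.15 should appear among the IP SANs
		// and www.google.com among the DNS SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs :", cert.IPAddresses)
	}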

                                                
                                    
TestCertExpiration (333.33s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-963871 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.029203243s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-963871 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m51.558662654s)
helpers_test.go:175: Cleaning up "cert-expiration-963871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-963871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-963871: (2.742877212s)
--- PASS: TestCertExpiration (333.33s)
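The two starts above first issue certificates valid for only 3m, then, once that window has elapsed, restart with --cert-expiration=8760h, which is exactly one year (365 days x 24 hours). The arithmetic, for the record:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		d, err := time.ParseDuration("8760h")
		if err != nil {
			panic(err)
		}
		fmt.Println(d.Hours() / 24) // prints 365: the renewed certs last one year
	}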

                                                
                                    
TestForceSystemdFlag (41.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-149728 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-149728 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.920099458s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-149728 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-149728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-149728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-149728: (3.108723632s)
--- PASS: TestForceSystemdFlag (41.33s)
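The ssh step above reads CRI-O's drop-in config to confirm that --force-systemd actually switched the cgroup manager. A sketch of the same check driven from Go, assuming the profile still exists (the one in this run is deleted right afterwards) and that the drop-in spells the setting as cgroup_manager = "systemd":

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Reads the drop-in over "minikube ssh", like the test does, then
		// looks for the systemd cgroup manager line. The exact formatting
		// of the drop-in is an assumption.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-149728",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Contains(string(out), `cgroup_manager = "systemd"`))
	}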

                                                
                                    
TestForceSystemdEnv (40.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-969063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.808745894s)
helpers_test.go:175: Cleaning up "force-systemd-env-969063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-969063
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-969063: (2.902198031s)
--- PASS: TestForceSystemdEnv (40.71s)

                                                
                                    
TestErrorSpam/setup (35.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-781622 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-781622 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-781622 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-781622 --driver=docker  --container-runtime=crio: (35.178190336s)
--- PASS: TestErrorSpam/setup (35.18s)

                                                
                                    
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.27s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 status
--- PASS: TestErrorSpam/status (1.27s)

                                                
                                    
TestErrorSpam/pause (5.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause: exit status 80 (1.712481309s)

                                                
                                                
-- stdout --
	* Pausing node nospam-781622 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause: exit status 80 (2.219981704s)

                                                
                                                
-- stdout --
	* Pausing node nospam-781622 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause: exit status 80 (1.531861929s)

                                                
                                                
-- stdout --
	* Pausing node nospam-781622 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.46s)
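All three pause attempts above fail the same way: per the stderr, minikube's pause path lists running containers with "sudo runc list -f json" inside the node, and that command exits 1 because the runc state directory /run/runc does not exist (CRI-O on this image evidently keeps its runtime state elsewhere). The failing step can be reproduced in isolation while the node is still up; a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Runs the same listing minikube's pause helper runs inside the node;
		// it exits non-zero whenever /run/runc is missing, matching the log.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "nospam-781622",
			"ssh", "sudo runc list -f json").CombinedOutput()
		fmt.Printf("%s\nerr: %v\n", out, err)
	}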

                                                
                                    
TestErrorSpam/unpause (6.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause: exit status 80 (1.947034541s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-781622 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause: exit status 80 (2.325513505s)

-- stdout --
	* Unpausing node nospam-781622 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause: exit status 80 (2.346551643s)

-- stdout --
	* Unpausing node nospam-781622 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:21:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.62s)
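Pause and unpause fail the same way here: minikube shells out to `sudo runc list -f json` to enumerate containers, and runc exits 1 because its default state directory /run/runc does not exist on this CRI-O node. A minimal sketch that reproduces the failing call (the Go wrapper is illustrative; only the runc command itself comes from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The exact command from the failing log lines. Under CRI-O the
		// default runc state dir (/run/runc) may be absent, so this exits 1
		// with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("containers: %s\n", out)
	}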

TestErrorSpam/stop (12.11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 stop: (11.904183209s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-781622 --log_dir /tmp/nospam-781622 stop
--- PASS: TestErrorSpam/stop (12.11s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-713593/.minikube/files/etc/test/nested/copy/715440/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1026 14:22:39.989989  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:39.996481  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.007786  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.029116  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.070438  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.151777  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.313209  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:40.634840  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:41.276300  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:42.557579  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:45.118940  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:22:50.240893  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:00.482818  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:20.964494  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-707472 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.980111926s)
--- PASS: TestFunctional/serial/StartWithProxy (79.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.07s)

=== RUN   TestFunctional/serial/SoftStart
I1026 14:23:34.420684  715440 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --alsologtostderr -v=8
E1026 14:24:01.926620  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-707472 --alsologtostderr -v=8: (29.060932196s)
functional_test.go:678: soft start took 29.0660675s for "functional-707472" cluster.
I1026 14:24:03.481942  715440 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.07s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-707472 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:3.1: (1.154281822s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:3.3: (1.160217007s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 cache add registry.k8s.io/pause:latest: (1.086750031s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-707472 /tmp/TestFunctionalserialCacheCmdcacheadd_local1672340654/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache add minikube-local-cache-test:functional-707472
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache delete minikube-local-cache-test:functional-707472
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-707472
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (310.053412ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
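The cache_reload sequence above is: remove the image inside the node, prove it is gone, run `cache reload`, then prove it is back. A condensed sketch of the same cycle, assuming a minikube binary on PATH and the profile name from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		return cmd.Run()
	}

	func main() {
		const p = "functional-707472"
		// Delete the image on the node, confirm it is absent, then restore
		// it from the host-side cache and verify it is present again.
		_ = run("minikube", "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			log.Fatal("image should be absent before reload")
		}
		if err := run("minikube", "-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			log.Fatal("image still missing after reload")
		}
	}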

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 kubectl -- --context functional-707472 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-707472 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (37.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-707472 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.404415019s)
functional_test.go:776: restart took 37.404531754s for "functional-707472" cluster.
I1026 14:24:48.205933  715440 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.40s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-707472 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
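The health check above parses the full JSON dump for each control-plane pod's phase and Ready condition. An equivalent one-off jsonpath query (illustrative; not the code the test runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prints "component phase" per control-plane pod, mirroring the
		// phase/status lines logged by the test.
		out, err := exec.Command("kubectl", "--context", "functional-707472",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o",
			`jsonpath={range .items[*]}{.metadata.labels.component}{" "}{.status.phase}{"\n"}{end}`).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Print(string(out))
	}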

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 logs: (1.475843264s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 logs --file /tmp/TestFunctionalserialLogsFileCmd616476213/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 logs --file /tmp/TestFunctionalserialLogsFileCmd616476213/001/logs.txt: (1.497339732s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (4.12s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-707472 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-707472
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-707472: exit status 115 (411.078865ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30206 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-707472 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
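The SVC_UNREACHABLE exit above is expected: the NodePort is allocated, but no running pod ever backs invalid-svc, so the URL in the table cannot serve traffic. One way to confirm that state is to inspect the Service's Endpoints, which stay empty until a ready pod matches the selector (a sketch; this jsonpath probe is an assumption, not what minikube runs internally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-707472",
			"get", "endpoints", "invalid-svc", "-o", "jsonpath={.subsets}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		// An empty subsets field means no ready backing pods.
		if len(out) == 0 {
			fmt.Println("no ready endpoints: the service is unreachable")
			return
		}
		fmt.Printf("endpoints: %s\n", out)
	}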

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 config get cpus: exit status 14 (63.080088ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 config get cpus: exit status 14 (50.233159ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (9.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-707472 --alsologtostderr -v=1]
2025/10/26 14:35:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-707472 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 743142: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.04s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.546022ms)

-- stdout --
	* [functional-707472] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1026 14:35:20.226252  742845 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:35:20.226447  742845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.226792  742845 out.go:374] Setting ErrFile to fd 2...
	I1026 14:35:20.227031  742845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.227371  742845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:35:20.227940  742845 out.go:368] Setting JSON to false
	I1026 14:35:20.228987  742845 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15473,"bootTime":1761473848,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:35:20.229098  742845 start.go:141] virtualization:  
	I1026 14:35:20.232871  742845 out.go:179] * [functional-707472] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 14:35:20.236847  742845 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:35:20.236994  742845 notify.go:220] Checking for updates...
	I1026 14:35:20.242702  742845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:35:20.245778  742845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:35:20.248829  742845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:35:20.251817  742845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 14:35:20.254763  742845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:35:20.258271  742845 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:35:20.258828  742845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:35:20.288281  742845 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:35:20.288456  742845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:35:20.346655  742845 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 14:35:20.33733603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:35:20.346766  742845 docker.go:318] overlay module found
	I1026 14:35:20.349855  742845 out.go:179] * Using the docker driver based on existing profile
	I1026 14:35:20.352653  742845 start.go:305] selected driver: docker
	I1026 14:35:20.352679  742845 start.go:925] validating driver "docker" against &{Name:functional-707472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-707472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:35:20.352811  742845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:35:20.356294  742845 out.go:203] 
	W1026 14:35:20.359235  742845 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 14:35:20.362110  742845 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
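The dry run never creates a node: the 250MB request is rejected during pre-flight validation against an 1800MB usable minimum. A standalone sketch of that kind of floor check, with the threshold and message text taken from the log (the constant and function names are hypothetical, not minikube's):

	package main

	import "fmt"

	// minUsableMemoryMB mirrors the 1800MB floor reported above.
	const minUsableMemoryMB = 1800

	func validateRequestedMemory(reqMB int) error {
		if reqMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		// Reproduces the rejection of the 250MB request from the log.
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}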

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-707472 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.155272ms)

-- stdout --
	* [functional-707472] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1026 14:35:20.677663  742965 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:35:20.677779  742965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.677790  742965 out.go:374] Setting ErrFile to fd 2...
	I1026 14:35:20.677795  742965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:35:20.678155  742965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:35:20.678527  742965 out.go:368] Setting JSON to false
	I1026 14:35:20.679402  742965 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15473,"bootTime":1761473848,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 14:35:20.679505  742965 start.go:141] virtualization:  
	I1026 14:35:20.682703  742965 out.go:179] * [functional-707472] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1026 14:35:20.686507  742965 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:35:20.686557  742965 notify.go:220] Checking for updates...
	I1026 14:35:20.690258  742965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:35:20.693096  742965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 14:35:20.695898  742965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 14:35:20.698836  742965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 14:35:20.701705  742965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:35:20.705097  742965 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:35:20.705673  742965 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:35:20.740921  742965 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 14:35:20.741126  742965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:35:20.808633  742965 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-26 14:35:20.798565361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:35:20.808908  742965 docker.go:318] overlay module found
	I1026 14:35:20.812091  742965 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1026 14:35:20.815011  742965 start.go:305] selected driver: docker
	I1026 14:35:20.815034  742965 start.go:925] validating driver "docker" against &{Name:functional-707472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-707472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:35:20.815139  742965 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:35:20.818709  742965 out.go:203] 
	W1026 14:35:20.821725  742965 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 14:35:20.824626  742965 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.89s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9258ea53-ec22-4294-a710-dac9b9885b47] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004137556s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-707472 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-707472 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-707472 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-707472 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e5db331a-37b3-4e0b-90be-f7e2ced194a5] Pending
helpers_test.go:352: "sp-pod" [e5db331a-37b3-4e0b-90be-f7e2ced194a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1026 14:25:23.848015  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [e5db331a-37b3-4e0b-90be-f7e2ced194a5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003467994s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-707472 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-707472 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-707472 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f86e43f4-71a3-48c5-ae71-01cd32e9688d] Pending
helpers_test.go:352: "sp-pod" [f86e43f4-71a3-48c5-ae71-01cd32e9688d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004384629s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-707472 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.89s)
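The claim outlives the pod here: the test writes a file through the PVC mount, deletes and recreates the pod, and expects the file to survive. A condensed sketch of the same check (readiness waits omitted for brevity; the manifests and context name are the ones from the log):

	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(args ...string) {
		full := append([]string{"--context", "functional-707472"}, args...)
		if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %s", err, out)
		}
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// Once the new pod is Running, the file written by the old pod
		// should still be visible through the claim.
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}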

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh -n functional-707472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cp functional-707472:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1195652075/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh -n functional-707472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh -n functional-707472 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/715440/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /etc/test/nested/copy/715440/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/715440.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /etc/ssl/certs/715440.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/715440.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /usr/share/ca-certificates/715440.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7154402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /etc/ssl/certs/7154402.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7154402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /usr/share/ca-certificates/7154402.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
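The numeric filenames checked above follow OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject_hash>.0 is a link named after the certificate's subject hash plus a collision counter. A sketch that recomputes the expected link name for one of the synced certs (the PEM path is taken from the log; run it where that file exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// openssl prints the 8-hex-digit subject hash; appending ".0"
		// yields the link name the test looks for under /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
			"-in", "/usr/share/ca-certificates/715440.pem").Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Printf("expected link: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
	}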

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-707472 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh "sudo systemctl is-active docker": exit status 1 (366.034979ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh "sudo systemctl is-active containerd": exit status 1 (377.201089ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
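
Both probes exit non-zero by design: `systemctl is-active` prints the unit state and returns 3 for an inactive unit (surfacing here as a non-zero minikube ssh exit), so with crio selected the failing probe is the passing case. A sketch of the same assertion (profile name from this run; `minikube` on PATH assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// is-active prints the state; any non-"active" state exits non-zero,
		// so the error is ignored and the printed state is checked instead.
		out, _ := exec.Command("minikube", "-p", "functional-707472",
			"ssh", "sudo systemctl is-active "+unit).Output()
		if state := strings.TrimSpace(string(out)); state == "active" {
			panic(unit + " should be inactive when crio is the runtime")
		}
		fmt.Println(unit, "is not active")
	}
}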

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 version -o=json --components: (1.204860696s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-707472 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-707472 image ls --format short --alsologtostderr:
I1026 14:35:31.648225  743505 out.go:360] Setting OutFile to fd 1 ...
I1026 14:35:31.648346  743505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:31.648351  743505 out.go:374] Setting ErrFile to fd 2...
I1026 14:35:31.648355  743505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:31.648691  743505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
I1026 14:35:31.650005  743505 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:31.650168  743505 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:31.650775  743505 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
I1026 14:35:31.668130  743505 ssh_runner.go:195] Run: systemctl --version
I1026 14:35:31.668195  743505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
I1026 14:35:31.685473  743505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
I1026 14:35:31.787312  743505 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
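
The short format prints one repo:tag per line, which makes presence checks trivial. A sketch asserting a couple of the images listed above (expected names copied from this run's output; `minikube` on PATH assumed):

package main

import (
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-707472",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		panic(err)
	}
	// Each line is a single repo:tag, so substring checks suffice.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !strings.Contains(string(out), want) {
			panic("missing image: " + want)
		}
	}
}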

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-707472 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-707472  │ e73df9e60d449 │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-707472 image ls --format table --alsologtostderr:
I1026 14:35:36.317401  743980 out.go:360] Setting OutFile to fd 1 ...
I1026 14:35:36.317568  743980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:36.317580  743980 out.go:374] Setting ErrFile to fd 2...
I1026 14:35:36.317585  743980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:36.317846  743980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
I1026 14:35:36.318515  743980 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:36.318636  743980 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:36.319115  743980 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
I1026 14:35:36.339224  743980 ssh_runner.go:195] Run: systemctl --version
I1026 14:35:36.339333  743980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
I1026 14:35:36.357014  743980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
I1026 14:35:36.459427  743980 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-707472 image ls --format json --alsologtostderr:
[{"id":"50e922ed475c66d9aaabb69e5cfd3e6198d419955754ca0a8daa884ef4cfe018","repoDigests":["docker.io/library/cfbaa748d6ab4ea0d2437aa44d1e7dcf7605f42b730f70083d80791fadf63401-tmp@sha256:3e622ecda1290610ccda97acc40b39080e9f2703124081dfadf39adda5a06d72"],"repoTags":[],"size":"1638179"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d
7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-m
inikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"e73df9e60d4491c13c878a10821d6a0d59b628c406adad5709a7422bdbcaf417","repoDigests":["localhost/my-image@sha256:7bb5f3b6760b52f1293c0b428ed90caf3d0c5533c0e5d4e52a7e282e89c23931"],"repoTags":["localhost/my-image:functi
onal-707472"],"size":"1640791"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23e
ff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.
34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"b5f57ec6b98676d815366685a0422bd164
ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-707472 image ls --format json --alsologtostderr:
I1026 14:35:36.076642  743944 out.go:360] Setting OutFile to fd 1 ...
I1026 14:35:36.076927  743944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:36.077001  743944 out.go:374] Setting ErrFile to fd 2...
I1026 14:35:36.077022  743944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:36.077499  743944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
I1026 14:35:36.078603  743944 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:36.078847  743944 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:36.079939  743944 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
I1026 14:35:36.098198  743944 ssh_runner.go:195] Run: systemctl --version
I1026 14:35:36.098253  743944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
I1026 14:35:36.117051  743944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
I1026 14:35:36.223475  743944 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
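
The JSON stdout above is an array of objects with id, repoDigests, repoTags, and size, where size is a decimal string rather than a number. A sketch of decoding it (field names taken from the output above; `minikube` on PATH assumed):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-707472",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		// IDs in this output are 64 hex chars; print the short form.
		fmt.Println(im.ID[:12], im.RepoTags, im.Size+" bytes")
	}
}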

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-707472 image ls --format yaml --alsologtostderr:
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-707472 image ls --format yaml --alsologtostderr:
I1026 14:35:31.876889  743540 out.go:360] Setting OutFile to fd 1 ...
I1026 14:35:31.877067  743540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:31.877098  743540 out.go:374] Setting ErrFile to fd 2...
I1026 14:35:31.877125  743540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:31.877414  743540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
I1026 14:35:31.878028  743540 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:31.878222  743540 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:31.878713  743540 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
I1026 14:35:31.896125  743540 ssh_runner.go:195] Run: systemctl --version
I1026 14:35:31.896175  743540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
I1026 14:35:31.913120  743540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
I1026 14:35:32.016118  743540 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh pgrep buildkitd: exit status 1 (273.046417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image build -t localhost/my-image:functional-707472 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-707472 image build -t localhost/my-image:functional-707472 testdata/build --alsologtostderr: (3.447868834s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-707472 image build -t localhost/my-image:functional-707472 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 50e922ed475
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-707472
--> e73df9e60d4
Successfully tagged localhost/my-image:functional-707472
e73df9e60d4491c13c878a10821d6a0d59b628c406adad5709a7422bdbcaf417
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-707472 image build -t localhost/my-image:functional-707472 testdata/build --alsologtostderr:
I1026 14:35:32.377346  743638 out.go:360] Setting OutFile to fd 1 ...
I1026 14:35:32.378246  743638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:32.378289  743638 out.go:374] Setting ErrFile to fd 2...
I1026 14:35:32.378313  743638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:35:32.378622  743638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
I1026 14:35:32.379298  743638 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:32.380589  743638 config.go:182] Loaded profile config "functional-707472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:35:32.381183  743638 cli_runner.go:164] Run: docker container inspect functional-707472 --format={{.State.Status}}
I1026 14:35:32.399319  743638 ssh_runner.go:195] Run: systemctl --version
I1026 14:35:32.399390  743638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-707472
I1026 14:35:32.417553  743638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/functional-707472/id_rsa Username:docker}
I1026 14:35:32.523760  743638 build_images.go:161] Building image from path: /tmp/build.4101669951.tar
I1026 14:35:32.523829  743638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 14:35:32.532350  743638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4101669951.tar
I1026 14:35:32.537118  743638 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4101669951.tar: stat -c "%s %y" /var/lib/minikube/build/build.4101669951.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4101669951.tar': No such file or directory
I1026 14:35:32.537160  743638 ssh_runner.go:362] scp /tmp/build.4101669951.tar --> /var/lib/minikube/build/build.4101669951.tar (3072 bytes)
I1026 14:35:32.557643  743638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4101669951
I1026 14:35:32.566009  743638 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4101669951 -xf /var/lib/minikube/build/build.4101669951.tar
I1026 14:35:32.577772  743638 crio.go:315] Building image: /var/lib/minikube/build/build.4101669951
I1026 14:35:32.577846  743638 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-707472 /var/lib/minikube/build/build.4101669951 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1026 14:35:35.749176  743638 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-707472 /var/lib/minikube/build/build.4101669951 --cgroup-manager=cgroupfs: (3.171300906s)
I1026 14:35:35.749252  743638 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4101669951
I1026 14:35:35.758335  743638 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4101669951.tar
I1026 14:35:35.766760  743638 build_images.go:217] Built localhost/my-image:functional-707472 from /tmp/build.4101669951.tar
I1026 14:35:35.766794  743638 build_images.go:133] succeeded building to: functional-707472
I1026 14:35:35.766800  743638 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
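
Per the stderr above, the build path tars the local context, copies it to /var/lib/minikube/build inside the node, and runs `sudo podman build` there because the runtime is crio. A sketch of driving the same flow from the CLI and then confirming the tag (names copied from this run; `minikube` on PATH assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build: minikube tars testdata/build, ships it into the node,
	// and runs `sudo podman build` there (crio runtime).
	build := exec.Command("minikube", "-p", "functional-707472", "image", "build",
		"-t", "localhost/my-image:functional-707472", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// Verify: the new tag should now show up in the runtime's image list.
	out, err := exec.Command("minikube", "-p", "functional-707472", "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}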

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-707472
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image rm kicbase/echo-server:functional-707472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 739265: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-707472 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d0b66e70-4820-43a9-ba08-cd84f25bf48a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d0b66e70-4820-43a9-ba08-cd84f25bf48a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003203045s
I1026 14:25:13.426995  715440 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-707472 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
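
With `minikube tunnel` running, the LoadBalancer service eventually gets an ingress IP, which the test reads with the JSONPath query shown above. A sketch that polls the same field until the tunnel populates it (context name from this run; the two-minute deadline is an arbitrary choice):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-707472",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("tunnel never assigned an ingress IP")
}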

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.36.20 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-707472 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 service list -o json
functional_test.go:1504: Took "531.329118ms" to run "out/minikube-linux-arm64 -p functional-707472 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "387.242266ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.822073ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "358.557734ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.392315ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdany-port13085791/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761489308630863757" to /tmp/TestFunctionalparallelMountCmdany-port13085791/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761489308630863757" to /tmp/TestFunctionalparallelMountCmdany-port13085791/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761489308630863757" to /tmp/TestFunctionalparallelMountCmdany-port13085791/001/test-1761489308630863757
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.899013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 14:35:09.008022  715440 retry.go:31] will retry after 557.133767ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 14:35 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 14:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 14:35 test-1761489308630863757
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh cat /mount-9p/test-1761489308630863757
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-707472 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4c0d9bb5-e998-4cda-a24e-5424e558dbc6] Pending
helpers_test.go:352: "busybox-mount" [4c0d9bb5-e998-4cda-a24e-5424e558dbc6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4c0d9bb5-e998-4cda-a24e-5424e558dbc6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4c0d9bb5-e998-4cda-a24e-5424e558dbc6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004367434s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-707472 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdany-port13085791/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.05s)
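
The first findmnt probe above fails because the 9p mount comes up asynchronously after `minikube mount` starts, so the harness retries after a short delay. A sketch of that retry loop (retry count and backoff are arbitrary; `minikube` on PATH assumed):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// Succeeds once the 9p filesystem is mounted at /mount-9p.
		err := exec.Command("minikube", "-p", "functional-707472",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(time.Duration(attempt) * 500 * time.Millisecond)
	}
	panic("mount never appeared")
}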

TestFunctional/parallel/MountCmd/specific-port (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdspecific-port2362580255/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.296716ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 14:35:17.029371  715440 retry.go:31] will retry after 737.135578ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdspecific-port2362580255/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-707472 ssh "sudo umount -f /mount-9p": exit status 1 (283.160849ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-707472 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdspecific-port2362580255/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-707472 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-707472 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-707472 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1197507906/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

TestFunctional/delete_echo-server_images (0.06s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-707472
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-707472
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-707472
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1026 14:37:39.979943  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:39:03.052868  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m27.145257404s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.04s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (42.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 kubectl -- rollout status deployment/busybox: (4.863488328s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:18.646825  715440 retry.go:31] will retry after 1.097545312s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:19.912170  715440 retry.go:31] will retry after 1.970838812s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:22.061154  715440 retry.go:31] will retry after 2.168642017s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:24.393526  715440 retry.go:31] will retry after 2.976708441s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:27.543957  715440 retry.go:31] will retry after 5.525688086s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:33.240345  715440 retry.go:31] will retry after 5.260171991s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
I1026 14:39:38.689566  715440 retry.go:31] will retry after 14.546722002s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.2.2 10.244.1.2 10.244.2.3'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-6njhc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-gdd8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-r82p2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-6njhc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-gdd8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-r82p2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-6njhc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-gdd8b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-r82p2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.48s)
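The retry lines above show a poll-with-growing-backoff loop: the fourth IP is flagged as possibly temporary, so the test keeps polling until exactly three addresses are reported. A minimal sketch of that pattern (not minikube's actual retry helper), assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodIPs polls pod IPs until exactly `want` addresses are reported
// or the deadline passes, roughly doubling the sleep between attempts as
// the retry intervals in the log do.
func waitForPodIPs(want int, timeout time.Duration) ([]string, error) {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		out, err := exec.Command("kubectl", "get", "pods", "-o",
			"jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) == want {
				return ips, nil
			}
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %d pod IPs", want)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	ips, err := waitForPodIPs(3, 2*time.Minute)
	fmt.Println(ips, err)
}

Polling until the count settles avoids flaking on the transient state where a replaced pod is still terminating while its successor already has an address.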

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.46s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-6njhc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-6njhc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-gdd8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-gdd8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-r82p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 kubectl -- exec busybox-7b57f96db7-r82p2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.46s)
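The nslookup | awk 'NR==5' | cut pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (the answer lands on line 5), then pings it once. A hedged Go sketch of the same round trip; the pod name is just one of the pods from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// pingHostFromPod resolves host.minikube.internal inside the pod and pings
// the resulting address once, mirroring the shell pipeline in the log.
func pingHostFromPod(pod string) error {
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		return err
	}
	ip := strings.TrimSpace(string(out))
	return exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run()
}

func main() {
	fmt.Println(pingHostFromPod("busybox-7b57f96db7-6njhc"))
}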

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.37s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node add --alsologtostderr -v 5
E1026 14:40:00.654378  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.660833  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.672325  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.693922  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.735323  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.817371  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:00.978856  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:01.304242  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:01.945900  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:03.227401  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:05.788858  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:10.910533  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:21.152598  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:40:41.634486  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 node add --alsologtostderr -v 5: (1m0.252842001s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5: (1.114479668s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.37s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-410341 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.113425915s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.51s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 status --output json --alsologtostderr -v 5: (1.294885992s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp testdata/cp-test.txt ha-410341:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268936617/001/cp-test_ha-410341.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341:/home/docker/cp-test.txt ha-410341-m02:/home/docker/cp-test_ha-410341_ha-410341-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test_ha-410341_ha-410341-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341:/home/docker/cp-test.txt ha-410341-m03:/home/docker/cp-test_ha-410341_ha-410341-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test_ha-410341_ha-410341-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341:/home/docker/cp-test.txt ha-410341-m04:/home/docker/cp-test_ha-410341_ha-410341-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test_ha-410341_ha-410341-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp testdata/cp-test.txt ha-410341-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268936617/001/cp-test_ha-410341-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m02:/home/docker/cp-test.txt ha-410341:/home/docker/cp-test_ha-410341-m02_ha-410341.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test_ha-410341-m02_ha-410341.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m02:/home/docker/cp-test.txt ha-410341-m03:/home/docker/cp-test_ha-410341-m02_ha-410341-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test_ha-410341-m02_ha-410341-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m02:/home/docker/cp-test.txt ha-410341-m04:/home/docker/cp-test_ha-410341-m02_ha-410341-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test_ha-410341-m02_ha-410341-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp testdata/cp-test.txt ha-410341-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268936617/001/cp-test_ha-410341-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m03:/home/docker/cp-test.txt ha-410341:/home/docker/cp-test_ha-410341-m03_ha-410341.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test_ha-410341-m03_ha-410341.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m03:/home/docker/cp-test.txt ha-410341-m02:/home/docker/cp-test_ha-410341-m03_ha-410341-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test_ha-410341-m03_ha-410341-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m03:/home/docker/cp-test.txt ha-410341-m04:/home/docker/cp-test_ha-410341-m03_ha-410341-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test_ha-410341-m03_ha-410341-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp testdata/cp-test.txt ha-410341-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268936617/001/cp-test_ha-410341-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m04:/home/docker/cp-test.txt ha-410341:/home/docker/cp-test_ha-410341-m04_ha-410341.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341 "sudo cat /home/docker/cp-test_ha-410341-m04_ha-410341.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m04:/home/docker/cp-test.txt ha-410341-m02:/home/docker/cp-test_ha-410341-m04_ha-410341-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m02 "sudo cat /home/docker/cp-test_ha-410341-m04_ha-410341-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 cp ha-410341-m04:/home/docker/cp-test.txt ha-410341-m03:/home/docker/cp-test_ha-410341-m04_ha-410341-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 ssh -n ha-410341-m03 "sudo cat /home/docker/cp-test_ha-410341-m04_ha-410341-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.51s)
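The block above exercises every (source, destination) node pair: seed each node with the test file, fan it out to the others, and read it back over ssh. A compact sketch of the same matrix, using the profile and node names from the log:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("minikube", append([]string{"-p", "ha-410341"}, args...)...).Run()
}

func main() {
	nodes := []string{"ha-410341", "ha-410341-m02", "ha-410341-m03", "ha-410341-m04"}
	for _, src := range nodes {
		// seed the source node, then copy node-to-node to every other node
		_ = run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			name := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			_ = run("cp", src+":/home/docker/cp-test.txt", dst+":"+name)
			// read it back over ssh to prove the copy landed
			_ = run("ssh", "-n", dst, "sudo cat "+name)
		}
	}
}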

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node stop m02 --alsologtostderr -v 5
E1026 14:41:22.595814  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 node stop m02 --alsologtostderr -v 5: (12.142653651s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5: exit status 7 (782.675682ms)

                                                
                                                
-- stdout --
	ha-410341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-410341-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-410341-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-410341-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:41:32.551351  759243 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:41:32.551578  759243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:41:32.551592  759243 out.go:374] Setting ErrFile to fd 2...
	I1026 14:41:32.551598  759243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:41:32.552073  759243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:41:32.552327  759243 out.go:368] Setting JSON to false
	I1026 14:41:32.552356  759243 mustload.go:65] Loading cluster: ha-410341
	I1026 14:41:32.554380  759243 config.go:182] Loaded profile config "ha-410341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:41:32.554410  759243 status.go:174] checking status of ha-410341 ...
	I1026 14:41:32.554062  759243 notify.go:220] Checking for updates...
	I1026 14:41:32.555393  759243 cli_runner.go:164] Run: docker container inspect ha-410341 --format={{.State.Status}}
	I1026 14:41:32.578611  759243 status.go:371] ha-410341 host status = "Running" (err=<nil>)
	I1026 14:41:32.578633  759243 host.go:66] Checking if "ha-410341" exists ...
	I1026 14:41:32.578917  759243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-410341
	I1026 14:41:32.599064  759243 host.go:66] Checking if "ha-410341" exists ...
	I1026 14:41:32.599533  759243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:41:32.599582  759243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-410341
	I1026 14:41:32.622492  759243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/ha-410341/id_rsa Username:docker}
	I1026 14:41:32.731026  759243 ssh_runner.go:195] Run: systemctl --version
	I1026 14:41:32.739349  759243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:41:32.753706  759243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:41:32.813595  759243 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-26 14:41:32.802617613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:41:32.814180  759243 kubeconfig.go:125] found "ha-410341" server: "https://192.168.49.254:8443"
	I1026 14:41:32.814217  759243 api_server.go:166] Checking apiserver status ...
	I1026 14:41:32.814268  759243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:41:32.826837  759243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup
	I1026 14:41:32.835841  759243 api_server.go:182] apiserver freezer: "6:freezer:/docker/82bfae1ab178a13682deaed29d094b2349b50ffb08c938c324fef9e058cb67d3/crio/crio-d67911469e092200f5ad8db63317a4870be1eb039c95a518f4c4f2fac22b705f"
	I1026 14:41:32.835929  759243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/82bfae1ab178a13682deaed29d094b2349b50ffb08c938c324fef9e058cb67d3/crio/crio-d67911469e092200f5ad8db63317a4870be1eb039c95a518f4c4f2fac22b705f/freezer.state
	I1026 14:41:32.844681  759243 api_server.go:204] freezer state: "THAWED"
	I1026 14:41:32.844770  759243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 14:41:32.853732  759243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 14:41:32.853761  759243 status.go:463] ha-410341 apiserver status = Running (err=<nil>)
	I1026 14:41:32.853773  759243 status.go:176] ha-410341 status: &{Name:ha-410341 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:41:32.853791  759243 status.go:174] checking status of ha-410341-m02 ...
	I1026 14:41:32.854128  759243 cli_runner.go:164] Run: docker container inspect ha-410341-m02 --format={{.State.Status}}
	I1026 14:41:32.873508  759243 status.go:371] ha-410341-m02 host status = "Stopped" (err=<nil>)
	I1026 14:41:32.873534  759243 status.go:384] host is not running, skipping remaining checks
	I1026 14:41:32.873541  759243 status.go:176] ha-410341-m02 status: &{Name:ha-410341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:41:32.873563  759243 status.go:174] checking status of ha-410341-m03 ...
	I1026 14:41:32.873893  759243 cli_runner.go:164] Run: docker container inspect ha-410341-m03 --format={{.State.Status}}
	I1026 14:41:32.892038  759243 status.go:371] ha-410341-m03 host status = "Running" (err=<nil>)
	I1026 14:41:32.892064  759243 host.go:66] Checking if "ha-410341-m03" exists ...
	I1026 14:41:32.892377  759243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-410341-m03
	I1026 14:41:32.910482  759243 host.go:66] Checking if "ha-410341-m03" exists ...
	I1026 14:41:32.910870  759243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:41:32.910916  759243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-410341-m03
	I1026 14:41:32.928914  759243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/ha-410341-m03/id_rsa Username:docker}
	I1026 14:41:33.038584  759243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:41:33.053173  759243 kubeconfig.go:125] found "ha-410341" server: "https://192.168.49.254:8443"
	I1026 14:41:33.053203  759243 api_server.go:166] Checking apiserver status ...
	I1026 14:41:33.053245  759243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:41:33.066029  759243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	I1026 14:41:33.075513  759243 api_server.go:182] apiserver freezer: "6:freezer:/docker/8a21164fcadcc5683c45f2f26a21e3ae263bbd903c05f04ca9eeaae56a0a3c6d/crio/crio-f7926a2c923376e280588b2cb037fcc0bea64061bacbc6acce19db58d2771b5b"
	I1026 14:41:33.075595  759243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8a21164fcadcc5683c45f2f26a21e3ae263bbd903c05f04ca9eeaae56a0a3c6d/crio/crio-f7926a2c923376e280588b2cb037fcc0bea64061bacbc6acce19db58d2771b5b/freezer.state
	I1026 14:41:33.084117  759243 api_server.go:204] freezer state: "THAWED"
	I1026 14:41:33.084147  759243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 14:41:33.092555  759243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 14:41:33.092584  759243 status.go:463] ha-410341-m03 apiserver status = Running (err=<nil>)
	I1026 14:41:33.092594  759243 status.go:176] ha-410341-m03 status: &{Name:ha-410341-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:41:33.092611  759243 status.go:174] checking status of ha-410341-m04 ...
	I1026 14:41:33.093046  759243 cli_runner.go:164] Run: docker container inspect ha-410341-m04 --format={{.State.Status}}
	I1026 14:41:33.111446  759243 status.go:371] ha-410341-m04 host status = "Running" (err=<nil>)
	I1026 14:41:33.111473  759243 host.go:66] Checking if "ha-410341-m04" exists ...
	I1026 14:41:33.111784  759243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-410341-m04
	I1026 14:41:33.130155  759243 host.go:66] Checking if "ha-410341-m04" exists ...
	I1026 14:41:33.130470  759243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:41:33.130514  759243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-410341-m04
	I1026 14:41:33.148614  759243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/ha-410341-m04/id_rsa Username:docker}
	I1026 14:41:33.259232  759243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:41:33.272659  759243 status.go:176] ha-410341-m04 status: &{Name:ha-410341-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)
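The stderr trace shows how status decides an apiserver is Running: find the kube-apiserver process, confirm its freezer cgroup reports THAWED (a paused container would report FROZEN), then expect 200 from /healthz. A sketch under those assumptions; run is a hypothetical execute-over-ssh helper, and cgroupPath stands for the freezer path read from /proc/<pid>/cgroup as in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strings"
)

func apiserverHealthy(run func(cmd string) (string, error), cgroupPath, healthzURL string) error {
	// 1) the kube-apiserver process must exist on the node
	if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
		return fmt.Errorf("no apiserver process: %w", err)
	}
	// 2) its freezer cgroup must be THAWED, i.e. the container is not paused
	state, err := run("sudo cat " + cgroupPath + "/freezer.state")
	if err != nil || !strings.Contains(state, "THAWED") {
		return fmt.Errorf("apiserver frozen or unreadable (%q): %v", state, err)
	}
	// 3) the load-balanced endpoint must answer /healthz with 200
	// (test-only TLS config: the apiserver cert is not in the system pool)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {} // sketch only; wire up run, cgroupPath, and healthzURL for real use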

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.43s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 node start m02 --alsologtostderr -v 5: (30.003862181s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5: (1.271649662s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.43s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.35225987s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.68s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 stop --alsologtostderr -v 5: (27.773981018s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 start --wait true --alsologtostderr -v 5
E1026 14:42:39.979898  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:42:44.517445  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 start --wait true --alsologtostderr -v 5: (1m39.713566203s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.68s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 node delete m03 --alsologtostderr -v 5: (8.267525258s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)
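The go-template above prints one Ready condition status per node, so asserting the cluster is healthy reduces to checking that every printed value is True. A minimal sketch of that check (allNodesReady is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// allNodesReady runs the same go-template as the test and returns true only
// if every node reports Ready=True.
func allNodesReady() (bool, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return false, err
	}
	for _, s := range strings.Fields(string(out)) {
		if s != "True" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	fmt.Println(allNodesReady())
}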

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.41s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 stop --alsologtostderr -v 5
E1026 14:45:00.654033  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 stop --alsologtostderr -v 5: (36.285726085s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5: exit status 7 (123.870658ms)

                                                
                                                
-- stdout --
	ha-410341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-410341-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-410341-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:45:00.949262  770731 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:45:00.949391  770731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:45:00.949402  770731 out.go:374] Setting ErrFile to fd 2...
	I1026 14:45:00.949406  770731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:45:00.949668  770731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:45:00.949869  770731 out.go:368] Setting JSON to false
	I1026 14:45:00.949907  770731 mustload.go:65] Loading cluster: ha-410341
	I1026 14:45:00.950007  770731 notify.go:220] Checking for updates...
	I1026 14:45:00.950333  770731 config.go:182] Loaded profile config "ha-410341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:45:00.950354  770731 status.go:174] checking status of ha-410341 ...
	I1026 14:45:00.951234  770731 cli_runner.go:164] Run: docker container inspect ha-410341 --format={{.State.Status}}
	I1026 14:45:00.971607  770731 status.go:371] ha-410341 host status = "Stopped" (err=<nil>)
	I1026 14:45:00.971634  770731 status.go:384] host is not running, skipping remaining checks
	I1026 14:45:00.971641  770731 status.go:176] ha-410341 status: &{Name:ha-410341 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:45:00.971675  770731 status.go:174] checking status of ha-410341-m02 ...
	I1026 14:45:00.972001  770731 cli_runner.go:164] Run: docker container inspect ha-410341-m02 --format={{.State.Status}}
	I1026 14:45:00.993545  770731 status.go:371] ha-410341-m02 host status = "Stopped" (err=<nil>)
	I1026 14:45:00.993574  770731 status.go:384] host is not running, skipping remaining checks
	I1026 14:45:00.993592  770731 status.go:176] ha-410341-m02 status: &{Name:ha-410341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:45:00.993615  770731 status.go:174] checking status of ha-410341-m04 ...
	I1026 14:45:00.993914  770731 cli_runner.go:164] Run: docker container inspect ha-410341-m04 --format={{.State.Status}}
	I1026 14:45:01.017255  770731 status.go:371] ha-410341-m04 host status = "Stopped" (err=<nil>)
	I1026 14:45:01.017279  770731 status.go:384] host is not running, skipping remaining checks
	I1026 14:45:01.017287  770731 status.go:176] ha-410341-m04 status: &{Name:ha-410341-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (66.43s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1026 14:45:28.364853  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m5.430133141s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (66.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (54.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 node add --control-plane --alsologtostderr -v 5: (53.78657262s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-410341 status --alsologtostderr -v 5: (1.079016173s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (54.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.107093652s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
TestJSONOutput/start/Command (77.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-372963 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1026 14:47:39.984974  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-372963 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.86479154s)
--- PASS: TestJSONOutput/start/Command (77.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
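DistinctCurrentSteps and IncreasingCurrentSteps validate the CloudEvents stream that --output=json produces (the event shape is visible under TestErrorJSONOutput below): step events carry a data.currentstep string that should never repeat or go backwards. A sketch of the check, using strict increase, which implies distinctness; this is an illustration, not minikube's own assertion code:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

// stepsIncrease decodes one CloudEvent per line and verifies that the
// currentstep values of step events strictly increase.
func stepsIncrease(lines []string) bool {
	last := -1
	for _, l := range lines {
		var e stepEvent
		if json.Unmarshal([]byte(l), &e) != nil || e.Type != "io.k8s.sigs.minikube.step" {
			continue // skip info, error, and other event types
		}
		n, err := strconv.Atoi(e.Data.CurrentStep)
		if err != nil || n <= last {
			return false
		}
		last = n
	}
	return true
}

func main() {
	fmt.Println(stepsIncrease([]string{
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0"}}`,
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1"}}`,
	}))
}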

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-372963 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-372963 --output=json --user=testUser: (5.867637305s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-274392 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-274392 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.418468ms)
-- stdout --
	{"specversion":"1.0","id":"7dae70b4-9060-4380-afc3-bbec70bb2937","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-274392] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cb90d9b-ba3f-4a88-b444-1715c1826b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"44bd0fb7-fd9c-4198-932f-50d4f179f25a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"00ed5423-44d5-4265-ba39-89c912586fbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig"}}
	{"specversion":"1.0","id":"f6140897-4361-4793-bcfa-32d52a320269","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube"}}
	{"specversion":"1.0","id":"ab0f9314-93ec-4035-b869-5048b70e6a37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"662c5179-e57c-4628-83e6-b7c9643d3d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8a88f87-cf16-4858-ab5c-8d54af69a80c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-274392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-274392
--- PASS: TestErrorJSONOutput (0.25s)
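
Every line minikube prints under --output=json is a self-contained CloudEvents envelope, as the captured stdout above shows (type io.k8s.sigs.minikube.step / .info / .error, each with a data payload). A minimal Go sketch of a consumer for that stream follows; the struct mirrors only the keys visible in this log, so any further fields in minikube's real schema are outside it.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the CloudEvents keys visible in the stdout above;
// the real minikube schema may carry more fields (assumption).
type event struct {
	Type string `json:"type"`
	Data struct {
		Name        string `json:"name"`
		Message     string `json:"message"`
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		ExitCode    string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe the line-delimited JSON in, e.g.:
	//   out/minikube-linux-arm64 start -p p --output=json 2>/dev/null | go run consume.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s %s: %s\n", e.Data.CurrentStep, e.Data.TotalSteps, e.Data.Name, e.Data.Message)
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", e.Data.ExitCode, e.Data.Message)
		default:
			fmt.Println(e.Data.Message)
		}
	}
}

Fed the TestErrorJSONOutput stdout above, this would surface the DRV_UNSUPPORTED_OS message with exit code 56.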

TestKicCustomNetwork/create_custom_network (44.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-973815 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-973815 --network=: (41.970343808s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-973815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-973815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-973815: (2.171452327s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.17s)

TestKicCustomNetwork/use_default_bridge_network (37.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-538951 --network=bridge
E1026 14:50:00.654172  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-538951 --network=bridge: (35.794549879s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-538951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-538951
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-538951: (2.171435568s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.99s)

TestKicExistingNetwork (37.69s)

=== RUN   TestKicExistingNetwork
I1026 14:50:08.316104  715440 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 14:50:08.332794  715440 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 14:50:08.332869  715440 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1026 14:50:08.332886  715440 cli_runner.go:164] Run: docker network inspect existing-network
W1026 14:50:08.354086  715440 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1026 14:50:08.354116  715440 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1026 14:50:08.354128  715440 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1026 14:50:08.354240  715440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 14:50:08.372256  715440 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0def339861f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:3e:da:26:c3:bc} reservation:<nil>}
I1026 14:50:08.372588  715440 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017a4f60}
I1026 14:50:08.372630  715440 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1026 14:50:08.372683  715440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1026 14:50:08.439126  715440 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-150229 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-150229 --network=existing-network: (35.330187024s)
helpers_test.go:175: Cleaning up "existing-network-150229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-150229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-150229: (2.202526224s)
I1026 14:50:45.988366  715440 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.69s)
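
The network_create trace above is the interesting part: minikube skips 192.168.49.0/24 because the existing bridge br-0def339861f1 already holds it, then reserves 192.168.58.0/24. A rough Go sketch of that kind of probe is below; it shells out to the docker CLI (assumed on PATH) the same way the logged cli_runner calls do, and the /24 ladder stepping by 9 merely mirrors the addresses seen in this report (49, 58, 67), not a documented minikube constant.

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

// takenSubnets collects every Docker network's IPAM subnet via the CLI,
// using the same --format templating style as the log above.
func takenSubnets() ([]*net.IPNet, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	var taken []*net.IPNet
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // network may have vanished between ls and inspect
		}
		for _, s := range strings.Fields(string(out)) {
			if _, n, perr := net.ParseCIDR(s); perr == nil {
				taken = append(taken, n)
			}
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		fmt.Println("docker CLI unavailable:", err)
		return
	}
	for third := 49; third < 255; third += 9 {
		_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		free := true
		for _, t := range taken {
			// For same-sized /24s a mutual-containment check suffices.
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			fmt.Println("first free private subnet:", cand)
			return
		}
	}
}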

TestKicCustomSubnet (38.8s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-131841 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-131841 --subnet=192.168.60.0/24: (36.519274172s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-131841 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-131841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-131841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-131841: (2.248021119s)
--- PASS: TestKicCustomSubnet (38.80s)

TestKicStaticIP (40.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-668360 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-668360 --static-ip=192.168.200.200: (37.734541711s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-668360 ip
helpers_test.go:175: Cleaning up "static-ip-668360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-668360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-668360: (2.256070846s)
--- PASS: TestKicStaticIP (40.17s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (76.59s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-353801 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-353801 --driver=docker  --container-runtime=crio: (34.402662593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-356714 --driver=docker  --container-runtime=crio
E1026 14:52:39.979937  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-356714 --driver=docker  --container-runtime=crio: (36.537691513s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-353801
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-356714
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-356714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-356714
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-356714: (2.179006702s)
helpers_test.go:175: Cleaning up "first-353801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-353801
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-353801: (2.059391154s)
--- PASS: TestMinikubeProfile (76.59s)

TestMountStart/serial/StartWithMountFirst (7.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-798013 --memory=3072 --mount-string /tmp/TestMountStartserial3448950263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-798013 --memory=3072 --mount-string /tmp/TestMountStartserial3448950263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.343125454s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.34s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-798013 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-800245 --memory=3072 --mount-string /tmp/TestMountStartserial3448950263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-800245 --memory=3072 --mount-string /tmp/TestMountStartserial3448950263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.919490113s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.92s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-800245 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.75s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-798013 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-798013 --alsologtostderr -v=5: (1.748617803s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-800245 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-800245
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-800245: (1.296251135s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-800245
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-800245: (7.121652263s)
--- PASS: TestMountStart/serial/RestartStopped (8.12s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-800245 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (137.3s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-520131 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1026 14:55:00.654320  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:55:43.054866  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-520131 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.77563179s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.30s)

TestMultiNode/serial/DeployApp2Nodes (6.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-520131 -- rollout status deployment/busybox: (4.491463397s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-brdnj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-vzgb5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-brdnj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-vzgb5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-brdnj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-vzgb5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-brdnj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-brdnj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-vzgb5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-520131 -- exec busybox-7b57f96db7-vzgb5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
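
The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the third space-separated field of nslookup's fifth output line, which is where busybox's nslookup prints the resolved address; the pods then ping that gateway IP (192.168.67.1 here). The same extraction as a Go sketch; the line/field offsets come straight from the pipeline and only hold for busybox's output layout.

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output, then its third space-delimited field. Like cut, it
// splits on every single space rather than on runs of whitespace.
func hostIP(nslookupOut string) (string, bool) {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Illustrative output shaped like busybox nslookup's (an assumption,
	// not a capture from this run).
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	if ip, ok := hostIP(sample); ok {
		fmt.Println("host ip:", ip) // 192.168.67.1
	}
}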

TestMultiNode/serial/AddNode (58.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-520131 -v=5 --alsologtostderr
E1026 14:56:23.728148  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-520131 -v=5 --alsologtostderr: (57.653693911s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.35s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-520131 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.56s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp testdata/cp-test.txt multinode-520131:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990214313/001/cp-test_multinode-520131.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131:/home/docker/cp-test.txt multinode-520131-m02:/home/docker/cp-test_multinode-520131_multinode-520131-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test_multinode-520131_multinode-520131-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131:/home/docker/cp-test.txt multinode-520131-m03:/home/docker/cp-test_multinode-520131_multinode-520131-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test_multinode-520131_multinode-520131-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp testdata/cp-test.txt multinode-520131-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990214313/001/cp-test_multinode-520131-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m02:/home/docker/cp-test.txt multinode-520131:/home/docker/cp-test_multinode-520131-m02_multinode-520131.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test_multinode-520131-m02_multinode-520131.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m02:/home/docker/cp-test.txt multinode-520131-m03:/home/docker/cp-test_multinode-520131-m02_multinode-520131-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test_multinode-520131-m02_multinode-520131-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp testdata/cp-test.txt multinode-520131-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3990214313/001/cp-test_multinode-520131-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m03:/home/docker/cp-test.txt multinode-520131:/home/docker/cp-test_multinode-520131-m03_multinode-520131.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131 "sudo cat /home/docker/cp-test_multinode-520131-m03_multinode-520131.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 cp multinode-520131-m03:/home/docker/cp-test.txt multinode-520131-m02:/home/docker/cp-test_multinode-520131-m03_multinode-520131-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 ssh -n multinode-520131-m02 "sudo cat /home/docker/cp-test_multinode-520131-m03_multinode-520131-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.56s)

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-520131 node stop m03: (1.32308248s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-520131 status: exit status 7 (547.255759ms)
-- stdout --
	multinode-520131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-520131-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-520131-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr: exit status 7 (537.695594ms)
-- stdout --
	multinode-520131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-520131-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-520131-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 14:57:26.078467  821180 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:57:26.078605  821180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:57:26.078616  821180 out.go:374] Setting ErrFile to fd 2...
	I1026 14:57:26.078621  821180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:57:26.079014  821180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:57:26.079560  821180 out.go:368] Setting JSON to false
	I1026 14:57:26.079630  821180 mustload.go:65] Loading cluster: multinode-520131
	I1026 14:57:26.079699  821180 notify.go:220] Checking for updates...
	I1026 14:57:26.080100  821180 config.go:182] Loaded profile config "multinode-520131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:57:26.080136  821180 status.go:174] checking status of multinode-520131 ...
	I1026 14:57:26.080797  821180 cli_runner.go:164] Run: docker container inspect multinode-520131 --format={{.State.Status}}
	I1026 14:57:26.100474  821180 status.go:371] multinode-520131 host status = "Running" (err=<nil>)
	I1026 14:57:26.100497  821180 host.go:66] Checking if "multinode-520131" exists ...
	I1026 14:57:26.100897  821180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-520131
	I1026 14:57:26.126462  821180 host.go:66] Checking if "multinode-520131" exists ...
	I1026 14:57:26.126823  821180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:57:26.126879  821180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-520131
	I1026 14:57:26.145943  821180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33672 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/multinode-520131/id_rsa Username:docker}
	I1026 14:57:26.250805  821180 ssh_runner.go:195] Run: systemctl --version
	I1026 14:57:26.258194  821180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:57:26.272056  821180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:57:26.327171  821180 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-26 14:57:26.317847602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 14:57:26.327778  821180 kubeconfig.go:125] found "multinode-520131" server: "https://192.168.67.2:8443"
	I1026 14:57:26.327811  821180 api_server.go:166] Checking apiserver status ...
	I1026 14:57:26.327866  821180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:57:26.339173  821180 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	I1026 14:57:26.347729  821180 api_server.go:182] apiserver freezer: "6:freezer:/docker/120fec4e3d48c233f9a70ca7436ac840f3526e25e7a5d4ba5ad8b33a60660490/crio/crio-714f32af0406d26af861ba84dee2b35e55e04975f2e7e8aa208bb517d3bca633"
	I1026 14:57:26.347797  821180 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/120fec4e3d48c233f9a70ca7436ac840f3526e25e7a5d4ba5ad8b33a60660490/crio/crio-714f32af0406d26af861ba84dee2b35e55e04975f2e7e8aa208bb517d3bca633/freezer.state
	I1026 14:57:26.355548  821180 api_server.go:204] freezer state: "THAWED"
	I1026 14:57:26.355588  821180 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1026 14:57:26.363807  821180 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1026 14:57:26.363834  821180 status.go:463] multinode-520131 apiserver status = Running (err=<nil>)
	I1026 14:57:26.363845  821180 status.go:176] multinode-520131 status: &{Name:multinode-520131 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:57:26.363862  821180 status.go:174] checking status of multinode-520131-m02 ...
	I1026 14:57:26.364179  821180 cli_runner.go:164] Run: docker container inspect multinode-520131-m02 --format={{.State.Status}}
	I1026 14:57:26.382379  821180 status.go:371] multinode-520131-m02 host status = "Running" (err=<nil>)
	I1026 14:57:26.382406  821180 host.go:66] Checking if "multinode-520131-m02" exists ...
	I1026 14:57:26.382691  821180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-520131-m02
	I1026 14:57:26.399117  821180 host.go:66] Checking if "multinode-520131-m02" exists ...
	I1026 14:57:26.399440  821180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:57:26.399482  821180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-520131-m02
	I1026 14:57:26.423168  821180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/21664-713593/.minikube/machines/multinode-520131-m02/id_rsa Username:docker}
	I1026 14:57:26.530324  821180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:57:26.543347  821180 status.go:176] multinode-520131-m02 status: &{Name:multinode-520131-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:57:26.543381  821180 status.go:174] checking status of multinode-520131-m03 ...
	I1026 14:57:26.543686  821180 cli_runner.go:164] Run: docker container inspect multinode-520131-m03 --format={{.State.Status}}
	I1026 14:57:26.561198  821180 status.go:371] multinode-520131-m03 host status = "Stopped" (err=<nil>)
	I1026 14:57:26.561227  821180 status.go:384] host is not running, skipping remaining checks
	I1026 14:57:26.561234  821180 status.go:176] multinode-520131-m03 status: &{Name:multinode-520131-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
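
The stderr trace above documents the whole status probe: inspect the container state, ssh in to check the kubelet unit, locate the kube-apiserver process and confirm its freezer cgroup is THAWED, then GET /healthz and require a 200 "ok". A minimal sketch of just that last HTTP check, using the endpoint from the log; certificate verification is skipped here to keep the sketch self-contained (minikube's real api_server.go check, and possibly your apiserver's anonymous-auth settings, differ).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the status log above; adjust for your cluster.
	const healthz = "https://192.168.67.2:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch dependency-free; the real
		// check authenticates with the cluster's TLS material.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver: Stopped (", err, ")")
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Printf("apiserver: unhealthy (%d: %s)\n", resp.StatusCode, body)
	}
}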

TestMultiNode/serial/StartAfterStop (8.44s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-520131 node start m03 -v=5 --alsologtostderr: (7.636363739s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.44s)

TestMultiNode/serial/RestartKeepsNodes (78.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-520131
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-520131
E1026 14:57:39.981247  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-520131: (25.276272516s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-520131 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-520131 --wait=true -v=5 --alsologtostderr: (52.660186778s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-520131
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.08s)

TestMultiNode/serial/DeleteNode (5.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-520131 node delete m03: (4.951418162s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)

TestMultiNode/serial/StopMultiNode (24.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-520131 stop: (24.056732272s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-520131 status: exit status 7 (96.274278ms)
-- stdout --
	multinode-520131
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-520131-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr: exit status 7 (93.562259ms)
-- stdout --
	multinode-520131
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-520131-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 14:59:22.930528  828947 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:59:22.930707  828947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:59:22.930719  828947 out.go:374] Setting ErrFile to fd 2...
	I1026 14:59:22.930724  828947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:59:22.930980  828947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 14:59:22.931181  828947 out.go:368] Setting JSON to false
	I1026 14:59:22.931215  828947 mustload.go:65] Loading cluster: multinode-520131
	I1026 14:59:22.931317  828947 notify.go:220] Checking for updates...
	I1026 14:59:22.931611  828947 config.go:182] Loaded profile config "multinode-520131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:59:22.931630  828947 status.go:174] checking status of multinode-520131 ...
	I1026 14:59:22.932421  828947 cli_runner.go:164] Run: docker container inspect multinode-520131 --format={{.State.Status}}
	I1026 14:59:22.951173  828947 status.go:371] multinode-520131 host status = "Stopped" (err=<nil>)
	I1026 14:59:22.951194  828947 status.go:384] host is not running, skipping remaining checks
	I1026 14:59:22.951216  828947 status.go:176] multinode-520131 status: &{Name:multinode-520131 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:59:22.951248  828947 status.go:174] checking status of multinode-520131-m02 ...
	I1026 14:59:22.951539  828947 cli_runner.go:164] Run: docker container inspect multinode-520131-m02 --format={{.State.Status}}
	I1026 14:59:22.975116  828947 status.go:371] multinode-520131-m02 host status = "Stopped" (err=<nil>)
	I1026 14:59:22.975137  828947 status.go:384] host is not running, skipping remaining checks
	I1026 14:59:22.975161  828947 status.go:176] multinode-520131-m02 status: &{Name:multinode-520131-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.25s)

TestMultiNode/serial/RestartMultiNode (55.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-520131 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1026 15:00:00.656968  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-520131 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.132633111s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-520131 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.82s)

TestMultiNode/serial/ValidateNameConflict (35.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-520131
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-520131-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-520131-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.150257ms)
-- stdout --
	* [multinode-520131-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-520131-m02' is duplicated with machine name 'multinode-520131-m02' in profile 'multinode-520131'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-520131-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-520131-m03 --driver=docker  --container-runtime=crio: (33.108368621s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-520131
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-520131: exit status 80 (334.612225ms)
-- stdout --
	* Adding node m03 to cluster multinode-520131 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-520131-m03 already exists in multinode-520131-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-520131-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-520131-m03: (2.055424698s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.65s)

TestPreload (129.19s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-427418 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-427418 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m5.163906099s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-427418 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-427418 image pull gcr.io/k8s-minikube/busybox: (2.205875951s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-427418
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-427418: (5.934066446s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-427418 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1026 15:02:39.979650  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-427418 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.169493055s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-427418 image list
helpers_test.go:175: Cleaning up "test-preload-427418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-427418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-427418: (2.482710863s)
--- PASS: TestPreload (129.19s)

TestScheduledStopUnix (106.24s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-403779 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-403779 --memory=3072 --driver=docker  --container-runtime=crio: (29.985557897s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403779 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-403779 -n scheduled-stop-403779
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403779 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 15:03:38.489435  715440 retry.go:31] will retry after 136.121µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.490617  715440 retry.go:31] will retry after 131.415µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.491773  715440 retry.go:31] will retry after 165.155µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.492864  715440 retry.go:31] will retry after 472.008µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.493975  715440 retry.go:31] will retry after 441.316µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.495134  715440 retry.go:31] will retry after 687.313µs: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.496252  715440 retry.go:31] will retry after 1.06453ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.497377  715440 retry.go:31] will retry after 1.725428ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.499564  715440 retry.go:31] will retry after 1.991449ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.501740  715440 retry.go:31] will retry after 4.293496ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.506930  715440 retry.go:31] will retry after 4.714836ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.512855  715440 retry.go:31] will retry after 10.879209ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.524116  715440 retry.go:31] will retry after 6.875669ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.531842  715440 retry.go:31] will retry after 18.331843ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
I1026 15:03:38.551139  715440 retry.go:31] will retry after 42.449397ms: open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/scheduled-stop-403779/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403779 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-403779 -n scheduled-stop-403779
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-403779
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403779 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-403779
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-403779: exit status 7 (68.480843ms)

-- stdout --
	scheduled-stop-403779
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-403779 -n scheduled-stop-403779
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-403779 -n scheduled-stop-403779: exit status 7 (68.443247ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-403779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-403779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-403779: (4.635525818s)
--- PASS: TestScheduledStopUnix (106.24s)
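Note: the retry.go:31 lines above show minikube polling for the scheduled-stop pid file with exponential backoff — each logged delay roughly doubles, with jitter. A minimal Go sketch of that pattern (an illustration of the idea, not minikube's retry package; the path is hypothetical):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os"
    	"time"
    )

    // waitForFile polls until path exists, roughly doubling the delay
    // (with jitter) between attempts, as in the delays logged above.
    func waitForFile(path string, maxWait time.Duration) error {
    	delay := 100 * time.Microsecond
    	deadline := time.Now().Add(maxWait)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		// Randomize the step so concurrent pollers don't stampede.
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %s not yet present\n", jittered, path)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }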

                                                
                                    
TestInsufficientStorage (11.31s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-838183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1026 15:05:00.653937  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-838183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.684042527s)

-- stdout --
	{"specversion":"1.0","id":"46f2d958-fb5c-4d4f-ab05-fa456cdab499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-838183] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"401dc3db-d5f0-4566-bfe0-bb6f4fc008fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"0994c5c0-0e8e-4c4c-9fa6-533107005d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80cf8f47-b235-42c4-b9d7-d985cd92ab69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig"}}
	{"specversion":"1.0","id":"c94d65b4-9ca3-4123-b975-7dbf30f86d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube"}}
	{"specversion":"1.0","id":"8f94a444-6d0f-4e25-908a-222c6fda728f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7919f9a2-3b6c-446e-af43-eaceb9560237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cedbf25b-616f-49e2-9667-ca0315545d28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d67b616b-bb59-47f1-9788-0553bd33c734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0c40f34c-aa67-42df-b984-f4c9d0176e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39a4f720-2c97-4ebe-8346-4f24af7c03da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fdf5a704-4bcd-4b4c-ad00-c9b36814a9ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-838183\" primary control-plane node in \"insufficient-storage-838183\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2ff80a4-d91d-46a2-9ad1-53d12c5a8211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c2d448e-e533-4029-b817-c3941dc0c0a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"59d82839-6dc7-42ba-9aec-1debe7526c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-838183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-838183 --output=json --layout=cluster: exit status 7 (309.550498ms)

-- stdout --
	{"Name":"insufficient-storage-838183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-838183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1026 15:05:03.182064  845072 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-838183" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-838183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-838183 --output=json --layout=cluster: exit status 7 (315.899357ms)

-- stdout --
	{"Name":"insufficient-storage-838183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-838183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1026 15:05:03.497724  845137 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-838183" does not appear in /home/jenkins/minikube-integration/21664-713593/kubeconfig
	E1026 15:05:03.508268  845137 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/insufficient-storage-838183/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-838183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-838183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-838183: (1.993869093s)
--- PASS: TestInsufficientStorage (11.31s)
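Note: with --output=json, each stdout line above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the RSRC_DOCKER_STORAGE name and exit code 26. A sketch of a reader for that stream (field names mirror the events in the log; the reader itself is hypothetical, not minikube code):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    	"strings"
    )

    // event matches the shape of the objects above; data values are all strings.
    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // pipe "minikube start --output=json" in here
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		var ev event
    		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
    			continue // skip any non-JSON lines
    		}
    		switch {
    		case strings.HasSuffix(ev.Type, ".error"):
    			fmt.Printf("error %s (exit %s): %s\n",
    				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
    		case strings.HasSuffix(ev.Type, ".step"):
    			fmt.Printf("step %s/%s: %s\n",
    				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
    		}
    	}
    }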

                                                
                                    
TestRunningBinaryUpgrade (53.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.7941195 start -p running-upgrade-737377 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.7941195 start -p running-upgrade-737377 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.779720812s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-737377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-737377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.908179821s)
helpers_test.go:175: Cleaning up "running-upgrade-737377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-737377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-737377: (2.147781815s)
--- PASS: TestRunningBinaryUpgrade (53.65s)

                                                
                                    
TestKubernetesUpgrade (216.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.389959158s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-625210
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-625210: (1.447842679s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-625210 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-625210 status --format={{.Host}}: exit status 7 (93.377016ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1026 15:07:39.984826  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m12.788566796s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-625210 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (102.707984ms)

-- stdout --
	* [kubernetes-upgrade-625210] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-625210
	    minikube start -p kubernetes-upgrade-625210 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6252102 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-625210 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1026 15:10:00.654322  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-625210 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.834339996s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-625210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-625210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-625210: (2.397013485s)
--- PASS: TestKubernetesUpgrade (216.17s)
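Note: the "status error: exit status 7 (may be ok)" lines in this test reflect that "minikube status" encodes cluster state in its exit code, and 7 (host stopped) is the expected result right after "minikube stop". A Go sketch of reading that exit code (illustrative only; binary path and profile name from the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-arm64", "-p",
    		"kubernetes-upgrade-625210", "status", "--format={{.Host}}")
    	out, err := cmd.Output() // stdout is returned even on a non-zero exit
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Printf("running: %s", out)
    	case errors.As(err, &ee) && ee.ExitCode() == 7:
    		fmt.Printf("stopped (exit 7, expected after 'minikube stop'): %s", out)
    	default:
    		fmt.Println("status failed:", err)
    	}
    }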

                                                
                                    
TestMissingContainerUpgrade (123.91s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1181715562 start -p missing-upgrade-858783 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1181715562 start -p missing-upgrade-858783 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.349249884s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-858783
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-858783
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-858783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-858783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.462903672s)
helpers_test.go:175: Cleaning up "missing-upgrade-858783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-858783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-858783: (2.282548547s)
--- PASS: TestMissingContainerUpgrade (123.91s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.980356ms)

-- stdout --
	* [NoKubernetes-195451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
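Note: exit status 14 is minikube's MK_USAGE error; the test passes precisely because --no-kubernetes and --kubernetes-version are mutually exclusive. A generic Go sketch of the same kind of flag validation (flag names taken from the log; everything else is illustrative, not minikube's actual CLI code):

    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    	flag.Parse()

    	// Reject the contradictory pair up front, before doing any work.
    	if *noKubernetes && *kubernetesVersion != "" {
    		fmt.Fprintln(os.Stderr,
    			"cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14) // usage error, mirroring the MK_USAGE exit code above
    	}
    	fmt.Println("flags ok")
    }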

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (53.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-195451 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-195451 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (53.088124998s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-195451 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.69s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.139960585s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-195451 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-195451 status -o json: exit status 2 (339.792915ms)

-- stdout --
	{"Name":"NoKubernetes-195451","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-195451
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-195451: (2.238521039s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.72s)

                                                
                                    
TestNoKubernetes/serial/Start (9.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-195451 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.295237515s)
--- PASS: TestNoKubernetes/serial/Start (9.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-195451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-195451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.219284ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
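Note: "ssh: Process exited with status 3" above is the expected outcome — "systemctl is-active --quiet" exits 0 when a unit is active and non-zero (3 for inactive) otherwise, so a failing ssh command here proves the kubelet is not running. A local Go sketch of the same check (unit name from the log; assumes a systemd host):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("kubelet is active")
    	case errors.As(err, &ee):
    		// Exit code 3 means the unit exists but is inactive.
    		fmt.Printf("kubelet is not active (exit %d)\n", ee.ExitCode())
    	default:
    		fmt.Println("could not run systemctl:", err)
    	}
    }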

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-195451
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-195451: (1.292777786s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-195451 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-195451 --driver=docker  --container-runtime=crio: (6.91423648s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-195451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-195451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.006163ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (58.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3493683491 start -p stopped-upgrade-532297 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3493683491 start -p stopped-upgrade-532297 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.025656757s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3493683491 -p stopped-upgrade-532297 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3493683491 -p stopped-upgrade-532297 stop: (1.227975989s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-532297 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-532297 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.678033265s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-532297
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-532297: (1.258365077s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
TestPause/serial/Start (86.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-013921 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-013921 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m26.791189197s)
--- PASS: TestPause/serial/Start (86.79s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.76s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-013921 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-013921 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.739164928s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.76s)

                                                
                                    
TestNetworkPlugins/group/false (4.56s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-337407 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-337407 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (341.373003ms)

-- stdout --
	* [false-337407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1026 15:10:51.790696  877783 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:10:51.795085  877783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:51.795101  877783 out.go:374] Setting ErrFile to fd 2...
	I1026 15:10:51.795106  877783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:51.795434  877783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-713593/.minikube/bin
	I1026 15:10:51.795922  877783 out.go:368] Setting JSON to false
	I1026 15:10:51.796933  877783 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17604,"bootTime":1761473848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1026 15:10:51.797011  877783 start.go:141] virtualization:  
	I1026 15:10:51.800807  877783 out.go:179] * [false-337407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1026 15:10:51.804791  877783 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:10:51.804964  877783 notify.go:220] Checking for updates...
	I1026 15:10:51.810661  877783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:10:51.813513  877783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-713593/kubeconfig
	I1026 15:10:51.816406  877783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-713593/.minikube
	I1026 15:10:51.819226  877783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 15:10:51.822149  877783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:10:51.825576  877783 config.go:182] Loaded profile config "pause-013921": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:51.825702  877783 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:10:51.914916  877783 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1026 15:10:51.915070  877783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:10:52.034226  877783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-26 15:10:52.023482833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1026 15:10:52.034339  877783 docker.go:318] overlay module found
	I1026 15:10:52.037758  877783 out.go:179] * Using the docker driver based on user configuration
	I1026 15:10:52.040838  877783 start.go:305] selected driver: docker
	I1026 15:10:52.040858  877783 start.go:925] validating driver "docker" against <nil>
	I1026 15:10:52.040872  877783 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:10:52.044291  877783 out.go:203] 
	W1026 15:10:52.047150  877783 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 15:10:52.050133  877783 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-337407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-337407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-337407

>>> host: /etc/nsswitch.conf:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/hosts:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/resolv.conf:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-337407

>>> host: crictl pods:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: crictl containers:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> k8s: describe netcat deployment:
error: context "false-337407" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-337407" does not exist

>>> k8s: netcat logs:
error: context "false-337407" does not exist

>>> k8s: describe coredns deployment:
error: context "false-337407" does not exist

>>> k8s: describe coredns pods:
error: context "false-337407" does not exist

>>> k8s: coredns logs:
error: context "false-337407" does not exist

>>> k8s: describe api server pod(s):
error: context "false-337407" does not exist

>>> k8s: api server logs:
error: context "false-337407" does not exist

>>> host: /etc/cni:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: ip a s:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: ip r s:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: iptables-save:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: iptables table nat:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> k8s: describe kube-proxy daemon set:
error: context "false-337407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-337407" does not exist

>>> k8s: kube-proxy logs:
error: context "false-337407" does not exist

>>> host: kubelet daemon status:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: kubelet daemon config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> k8s: kubelet logs:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-013921
contexts:
- context:
    cluster: pause-013921
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-013921
  name: pause-013921
current-context: pause-013921
kind: Config
preferences: {}
users:
- name: pause-013921
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.crt
    client-key: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-337407

>>> host: docker daemon status:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: docker daemon config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/docker/daemon.json:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: docker system info:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: cri-docker daemon status:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: cri-docker daemon config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: cri-dockerd version:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: containerd daemon status:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: containerd daemon config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/containerd/config.toml:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: containerd config dump:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: crio daemon status:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: crio daemon config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

>>> host: /etc/crio:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-337407"

                                                
                                                
----------------------- debugLogs end: false-337407 [took: 4.050069315s] --------------------------------
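
The "context was not found" message above follows directly from the kubeconfig dumped at the top of this debug section: it defines only the pause-013921 cluster, context, and user, so resolving the context name false-337407 has nothing to match. A minimal client-go sketch of that same lookup, assuming the kubeconfig path arrives via KUBECONFIG (illustrative, not the harness's code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl would; the KUBECONFIG path here is
	// an assumption for the sketch (the harness writes per-profile files
	// under .minikube/profiles/<name>).
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubectl's "context was not found" corresponds to this map lookup failing.
	if _, ok := cfg.Contexts["false-337407"]; !ok {
		fmt.Println("context was not found for specified context: false-337407")
	}
}
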
helpers_test.go:175: Cleaning up "false-337407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-337407
--- PASS: TestNetworkPlugins/group/false (4.56s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1026 15:12:23.056924  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:12:39.979647  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:13:03.729419  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m4.61262408s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-304880 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e84a2428-1939-453d-bca6-7b2884f6ea51] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e84a2428-1939-453d-bca6-7b2884f6ea51] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004262017s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-304880 exec busybox -- /bin/sh -c "ulimit -n"
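The "waiting 8m0s for pods matching" lines above come from a label-selector poll against the cluster. A minimal client-go sketch of such a wait, assuming an already-built clientset; this is a stand-in, not the harness's actual helper in helpers_test.go:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunning polls until at least one pod matching selector is Running,
// mirroring the Pending -> Running transitions logged above.
func waitForRunning(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}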
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-304880 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-304880 --alsologtostderr -v=3: (12.012419364s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880: exit status 7 (77.111099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
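minikube status exits non-zero when the host is stopped, which is why the harness treats exit status 7 as acceptable here (the stdout pairs it with "Stopped"). A Go sketch of that tolerance; binary path and profile are copied from the log, the helper itself is illustrative:

package sketch

import (
	"errors"
	"os/exec"
)

// hostStatus shells out the same way the test does and accepts exit code 7,
// which this log pairs with a stopped host.
func hostStatus() (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-304880").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return string(out), nil // "Stopped" host: expected after minikube stop
	}
	return string(out), err
}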
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-304880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (47.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-304880 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.400851667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-304880 -n old-k8s-version-304880
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.82s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t54nl" [835824df-847d-402e-b2b4-fa53792bffa6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004553409s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t54nl" [835824df-847d-402e-b2b4-fa53792bffa6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00504632s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-304880 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-304880 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
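The image assertions above parse "image list --format=json" output. A sketch of consuming that output follows; the report never shows the JSON shape, so the decoding stays deliberately generic and the repoTags key is an assumption rather than a documented field:

package sketch

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listImages shells out for the JSON image list and prints whatever tag-like
// field each entry carries.
func listImages() error {
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"old-k8s-version-304880", "image", "list", "--format=json").Output()
	if err != nil {
		return err
	}
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		return err
	}
	for _, img := range images {
		fmt.Println(img["repoTags"]) // key name assumed, not confirmed by this log
	}
	return nil
}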
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (76.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:15:00.654491  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.926641367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.93s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-018497 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e2e9efa-2562-4274-8e98-1f31c6a5039f] Pending
helpers_test.go:352: "busybox" [3e2e9efa-2562-4274-8e98-1f31c6a5039f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e2e9efa-2562-4274-8e98-1f31c6a5039f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003277616s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-018497 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-018497 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-018497 --alsologtostderr -v=3: (12.142922032s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497: exit status 7 (120.733711ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-018497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (61.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-018497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.485326012s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-018497 -n embed-certs-018497
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (61.92s)

TestStartStop/group/no-preload/serial/FirstStart (73.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:17:39.980344  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/addons-501661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.410055727s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.41s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-85vnc" [2a3da6ff-3ac6-4c07-bf84-71014b0de0c8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004390638s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-85vnc" [2a3da6ff-3ac6-4c07-bf84-71014b0de0c8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003526819s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-018497 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-018497 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-954807 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9a8dabc7-7557-4a48-8806-6fd5fee80256] Pending
helpers_test.go:352: "busybox" [9a8dabc7-7557-4a48-8806-6fd5fee80256] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9a8dabc7-7557-4a48-8806-6fd5fee80256] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005514964s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-954807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.636623828s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.64s)

TestStartStop/group/no-preload/serial/Stop (12.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-954807 --alsologtostderr -v=3
E1026 15:18:23.501346  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.507744  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.519181  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.540822  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.582237  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.663622  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:23.825001  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:24.146692  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
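The burst of cert_rotation errors above is client-go's certificate reload loop calling open(2) on client certificate files that are no longer on disk; once a profile's files are cleaned up, every reload attempt logs the same "no such file or directory". A stand-alone sketch of the same failure mode (paths are caller-supplied and illustrative):

package sketch

import (
	"crypto/tls"
	"fmt"
)

// loadClientPair reproduces the failing step: reading a client cert/key pair
// from disk. With the profile's files deleted, LoadX509KeyPair returns the
// "open .../client.crt: no such file or directory" error seen above.
func loadClientPair(certFile, keyFile string) (*tls.Certificate, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("Loading client cert failed: %w", err)
	}
	return &cert, nil
}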
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-954807 --alsologtostderr -v=3: (12.212831411s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807: exit status 7 (101.711464ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-954807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1026 15:18:24.788312  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (58.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:18:26.070011  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:28.632827  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:33.754490  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:43.996329  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:04.477691  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-954807 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.85940832s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-954807 -n no-preload-954807
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.31s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mns4v" [89db4534-81ce-41d2-b3fa-771b17a5d05b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003776951s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0f11c185-ade9-4c11-afe9-250f741f209d] Pending
helpers_test.go:352: "busybox" [0f11c185-ade9-4c11-afe9-250f741f209d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003951131s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mns4v" [89db4534-81ce-41d2-b3fa-771b17a5d05b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003327652s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-954807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-954807 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-494684 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-494684 --alsologtostderr -v=3: (12.531974648s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.53s)

TestStartStop/group/newest-cni/serial/FirstStart (45.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:19:45.439530  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.30436366s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684: exit status 7 (119.382686ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-494684 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:20:00.654490  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/functional-707472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-494684 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.05547413s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-494684 -n default-k8s-diff-port-494684
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.61s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-810872 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-810872 --alsologtostderr -v=3: (2.139367184s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872: exit status 7 (80.390855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-810872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (16.09s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-810872 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.581989488s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-810872 -n newest-cni-810872
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f9ct2" [2313a016-1717-46d4-b96a-c1690b8d1d77] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003862743s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-810872 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f9ct2" [2313a016-1717-46d4-b96a-c1690b8d1d77] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00377003s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-494684 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestNetworkPlugins/group/auto/Start (88.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.932856318s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.93s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-494684 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

TestNetworkPlugins/group/kindnet/Start (85.48s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.481158155s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-337407 "pgrep -a kubelet"
I1026 15:22:28.954054  715440 config.go:182] Loaded profile config "auto-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2gr5d" [981b55b7-d65a-4724-874c-975fb9dc412a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2gr5d" [981b55b7-d65a-4724-874c-975fb9dc412a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003464583s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-zqc7p" [e9864ca6-dee8-4aa4-b973-acfe494d10e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004246174s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-337407 exec deployment/netcat -- nslookup kubernetes.default
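The DNS check above only asserts that the in-cluster resolver answers for the short service name kubernetes.default. The same probe expressed in Go (a sketch; it is meaningful only when run inside a pod that uses the cluster's DNS):

package sketch

import "net"

// clusterDNSWorks mirrors "nslookup kubernetes.default" from inside a pod.
func clusterDNSWorks() bool {
	addrs, err := net.LookupHost("kubernetes.default")
	return err == nil && len(addrs) > 0
}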
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
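The hairpin probe above ("nc ... -z netcat 8080" executed inside the netcat pod) checks that a pod can reach itself through its own service name, traffic that leaves the pod and is NATed straight back to it. A connect-only Go equivalent (sketch; service name and port copied from the log):

package sketch

import (
	"net"
	"time"
)

// hairpinReachable dials the pod's own service, as netcat's -z (zero-I/O)
// flag does, and reports plain TCP reachability.
func hairpinReachable() bool {
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}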
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-337407 "pgrep -a kubelet"
I1026 15:22:44.388153  715440 config.go:182] Loaded profile config "kindnet-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6r682" [ced7745e-8805-4cc9-8cd5-e95dbda89113] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6r682" [ced7745e-8805-4cc9-8cd5-e95dbda89113] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004318674s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/Start (81.16s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1026 15:23:01.664446  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:02.306683  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:03.592829  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:06.160033  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:11.282571  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m21.155394722s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.16s)
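Note: the E1026 cert_rotation lines interleaved above appear to come from kubeconfig entries whose client certificates were removed when their profiles were deleted — an inference from the missing-file paths, not something the log states. A sketch for pruning such stale entries by hand, using a context name taken from the error messages:

  # List contexts, then drop ones whose client certs no longer exist on disk.
  kubectl config get-contexts
  kubectl config delete-context no-preload-954807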
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1026 15:23:21.524568  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:23.500875  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:42.006752  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:23:51.207020  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/old-k8s-version-304880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.858654916s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.86s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-sphcb" [19d1c717-6531-4594-a525-43badc02859e] Running
E1026 15:24:22.968182  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:25.892364  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:25.898829  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:25.910218  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:25.931732  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:25.973125  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:26.054615  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:26.216089  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:26.537507  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:27.179560  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:24:28.461280  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004465932s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
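Note: the ControllerPod check waits on the CNI's own pods by label. The same query by hand (a sketch, assuming the calico-337407 context is available):

  # calico-node runs in kube-system with label k8s-app=calico-node.
  kubectl --context calico-337407 -n kube-system get pods -l k8s-app=calico-node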
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-337407 "pgrep -a kubelet"
I1026 15:24:28.988421  715440 config.go:182] Loaded profile config "calico-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)
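Note: KubeletFlags only asserts that "pgrep -a kubelet" finds a running kubelet inside the node. Since pgrep -a prints the full command line, individual flags can be inspected the same way (a sketch; the exact flag names depend on how minikube invokes the kubelet):

  # Show just the container-runtime endpoint flag from the kubelet command line.
  out/minikube-linux-arm64 ssh -p calico-337407 "pgrep -a kubelet" \
    | grep -o -- '--container-runtime-endpoint=[^ ]*'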
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bppcc" [3e8ca721-61b0-4509-986c-ca2f1ade3668] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:24:31.022575  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bppcc" [3e8ca721-61b0-4509-986c-ca2f1ade3668] Running
E1026 15:24:36.143896  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00460324s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-337407 "pgrep -a kubelet"
I1026 15:24:33.540911  715440 config.go:182] Loaded profile config "custom-flannel-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z4jv6" [78ce0311-f0b5-49e0-b62e-c31f7c770dc7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z4jv6" [78ce0311-f0b5-49e0-b62e-c31f7c770dc7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004421228s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1026 15:25:06.867823  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.817191301s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.82s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1026 15:25:44.889586  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:25:47.829823  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/default-k8s-diff-port-494684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.724047431s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.72s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gzp8c" [b05be5f8-8e7a-49df-9af3-24f38006fe3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004673579s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-337407 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-337407 "pgrep -a kubelet"
I1026 15:26:28.209801  715440 config.go:182] Loaded profile config "enable-default-cni-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-337407 replace --force -f testdata/netcat-deployment.yaml
I1026 15:26:28.370889  715440 config.go:182] Loaded profile config "flannel-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dk6wf" [400dc0f0-5065-4f4e-82b1-fad4b8c001f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dk6wf" [400dc0f0-5065-4f4e-82b1-fad4b8c001f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003320937s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g4f7x" [ae8d271e-9e1b-4194-894d-1d76ce96add2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g4f7x" [ae8d271e-9e1b-4194-894d-1d76ce96add2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003560688s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-337407 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (48.666981357s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.67s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-337407 "pgrep -a kubelet"
I1026 15:27:55.804556  715440 config.go:182] Loaded profile config "bridge-337407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-337407 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q64kz" [bb4d7154-f074-4d0c-9ac0-1f2135f1a019] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:27:58.486221  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/kindnet-337407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:28:01.019705  715440 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/no-preload-954807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-q64kz" [bb4d7154-f074-4d0c-9ac0-1f2135f1a019] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003321541s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-337407 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-337407 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (30/326)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-958542 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-958542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-958542
--- SKIP: TestDownloadOnlyKic (0.41s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-934812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-934812
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-337407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-337407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-337407

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/hosts:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/resolv.conf:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-337407

>>> host: crictl pods:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: crictl containers:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> k8s: describe netcat deployment:
error: context "kubenet-337407" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-337407" does not exist

>>> k8s: netcat logs:
error: context "kubenet-337407" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-337407" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-337407" does not exist

>>> k8s: coredns logs:
error: context "kubenet-337407" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-337407" does not exist

>>> k8s: api server logs:
error: context "kubenet-337407" does not exist

>>> host: /etc/cni:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: ip a s:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: ip r s:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: iptables-save:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: iptables table nat:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-337407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-337407" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-337407" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: kubelet daemon config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> k8s: kubelet logs:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-013921
contexts:
- context:
    cluster: pause-013921
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-013921
  name: pause-013921
current-context: pause-013921
kind: Config
preferences: {}
users:
- name: pause-013921
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.crt
    client-key: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.key
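
Note: the dump above is the kubeconfig these probes ran against; its only remaining context is pause-013921, which is why every kubenet-337407 query in this section fails with "context was not found". A quick way to list the surviving contexts (a sketch):

  kubectl config view -o jsonpath='{.contexts[*].name}'
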
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-337407

>>> host: docker daemon status:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: docker daemon config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: docker system info:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: cri-docker daemon status:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: cri-docker daemon config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: cri-dockerd version:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: containerd daemon status:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: containerd daemon config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: containerd config dump:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: crio daemon status:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: crio daemon config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: /etc/crio:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

>>> host: crio config:
* Profile "kubenet-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-337407"

----------------------- debugLogs end: kubenet-337407 [took: 5.505133896s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-337407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-337407
--- SKIP: TestNetworkPlugins/group/kubenet (5.73s)

TestNetworkPlugins/group/cilium (4.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-337407 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-337407

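Editor's note (not part of the test output): the two dig probes above check that the cluster DNS service at 10.96.0.10 answers over both UDP and TCP. A rough Go equivalent of the tcp/53 probe, assuming it runs somewhere with a route to the service IP (IP and hostname taken from the headers above; use "udp" for the udp/53 variant):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Resolver that forces all queries to the cluster DNS service
    	// over TCP, ignoring the system-configured resolver address.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 5 * time.Second}
    			return d.DialContext(ctx, "tcp", "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("resolved:", addrs)
    }
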
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-337407

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-337407

>>> host: /etc/nsswitch.conf:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/hosts:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/resolv.conf:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-337407

>>> host: crictl pods:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: crictl containers:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> k8s: describe netcat deployment:
error: context "cilium-337407" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-337407" does not exist

>>> k8s: netcat logs:
error: context "cilium-337407" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-337407" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-337407" does not exist

>>> k8s: coredns logs:
error: context "cilium-337407" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-337407" does not exist

>>> k8s: api server logs:
error: context "cilium-337407" does not exist

>>> host: /etc/cni:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: ip a s:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: ip r s:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: iptables-save:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: iptables table nat:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

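Editor's note (not part of the test output): on a live node this section would capture the NAT rules that kube-proxy programs for Services. A minimal sketch of the equivalent dump, assuming a Linux host with iptables installed and root (or CAP_NET_ADMIN):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Print the nat table in rule-spec form, the same view
    	// `iptables-save` gives for a single table.
    	out, err := exec.Command("iptables", "-t", "nat", "-S").CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "iptables failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	fmt.Print(string(out))
    }
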
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-337407

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-337407

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-337407" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-337407" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-337407

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-337407

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-337407" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-337407" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-337407" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-337407" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-337407" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: kubelet daemon config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> k8s: kubelet logs:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-713593/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-013921
contexts:
- context:
    cluster: pause-013921
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-013921
  name: pause-013921
current-context: pause-013921
kind: Config
preferences: {}
users:
- name: pause-013921
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.crt
    client-key: /home/jenkins/minikube-integration/21664-713593/.minikube/profiles/pause-013921/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-337407

>>> host: docker daemon status:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: docker daemon config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: docker system info:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: cri-docker daemon status:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: cri-docker daemon config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: cri-dockerd version:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: containerd daemon status:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: containerd daemon config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: containerd config dump:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: crio daemon status:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: crio daemon config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: /etc/crio:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

>>> host: crio config:
* Profile "cilium-337407" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-337407"

----------------------- debugLogs end: cilium-337407 [took: 4.139549689s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-337407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-337407
--- SKIP: TestNetworkPlugins/group/cilium (4.31s)
